Test Report: Docker_macOS 17936

37a485e4feb148de92f40b101448d251106852cf:2024-02-16:33175

Failed tests (12/333)

TestIngressAddonLegacy/StartLegacyK8sCluster (278.04s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-502000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0216 08:58:01.367929    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:58:29.051251    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:58:59.580701    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.585838    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.595955    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.616782    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.657651    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.739749    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.901427    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:00.221526    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:00.861682    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:02.141863    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:04.702003    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:09.822025    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:20.062016    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:40.541849    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 09:00:21.501892    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 09:01:43.452599    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
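
Note: the repeated cert_rotation errors above are emitted by client-go's certificate-rotation watcher, which is still polling client certificates belonging to earlier test profiles (addons-983000, functional-060000) that appear to have been removed by those tests' cleanup; they are background noise rather than the cause of this failure. A quick spot-check, as a sketch (the paths are taken verbatim from the messages above):

    # Confirm the certificates the watcher is polling really are gone.
    ls -l /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt
    ls -l /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt
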
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-502000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m37.991142177s)
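
Note: the failing invocation, reformatted here in case you want to reproduce it by hand (binary path, profile name, and flags are exactly as logged; a failed run leaves the profile behind, so delete it between attempts):

    out/minikube-darwin-amd64 start -p ingress-addon-legacy-502000 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
      --alsologtostderr -v=5 --driver=docker
    # Clean up before retrying:
    out/minikube-darwin-amd64 delete -p ingress-addon-legacy-502000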

-- stdout --
	* [ingress-addon-legacy-502000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=17936
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-502000 in cluster ingress-addon-legacy-502000
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
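
Note: "Generating certificates and keys" and "Booting up control plane" each appear twice in the stdout above, which suggests minikube retried the kubeadm bring-up after the first attempt failed, before finally giving up with exit status 109. A sketch for inspecting the stalled node by hand (the container name is taken from the log; these are plain Docker commands, not minikube internals):

    # Console output of the kic container (its entrypoint is systemd):
    docker logs ingress-addon-legacy-502000 2>&1 | tail -n 50
    # kubelet logs inside the container, where kubeadm failures usually surface:
    docker exec ingress-addon-legacy-502000 journalctl -u kubelet --no-pager | tail -n 50
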
** stderr ** 
	I0216 08:57:37.748120    5455 out.go:291] Setting OutFile to fd 1 ...
	I0216 08:57:37.748381    5455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:57:37.748388    5455 out.go:304] Setting ErrFile to fd 2...
	I0216 08:57:37.748393    5455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:57:37.748583    5455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 08:57:37.750249    5455 out.go:298] Setting JSON to false
	I0216 08:57:37.773782    5455 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1628,"bootTime":1708101029,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 08:57:37.773899    5455 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 08:57:37.795594    5455 out.go:177] * [ingress-addon-legacy-502000] minikube v1.32.0 on Darwin 14.3.1
	I0216 08:57:37.837875    5455 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 08:57:37.837994    5455 notify.go:220] Checking for updates...
	I0216 08:57:37.859716    5455 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 08:57:37.880441    5455 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 08:57:37.901854    5455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 08:57:37.923627    5455 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	I0216 08:57:37.944404    5455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 08:57:37.966036    5455 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 08:57:38.022545    5455 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 08:57:38.022712    5455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 08:57:38.129173    5455 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-16 16:57:38.118799055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 08:57:38.171421    5455 out.go:177] * Using the docker driver based on user configuration
	I0216 08:57:38.192486    5455 start.go:299] selected driver: docker
	I0216 08:57:38.192504    5455 start.go:903] validating driver "docker" against <nil>
	I0216 08:57:38.192517    5455 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 08:57:38.196093    5455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 08:57:38.303796    5455 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-16 16:57:38.294045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 08:57:38.303943    5455 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 08:57:38.304115    5455 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 08:57:38.325570    5455 out.go:177] * Using Docker Desktop driver with root privileges
	I0216 08:57:38.346414    5455 cni.go:84] Creating CNI manager for ""
	I0216 08:57:38.346449    5455 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 08:57:38.346465    5455 start_flags.go:323] config:
	{Name:ingress-addon-legacy-502000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-502000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 08:57:38.368618    5455 out.go:177] * Starting control plane node ingress-addon-legacy-502000 in cluster ingress-addon-legacy-502000
	I0216 08:57:38.410397    5455 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 08:57:38.431485    5455 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 08:57:38.474324    5455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 08:57:38.474418    5455 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 08:57:38.525329    5455 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 08:57:38.525353    5455 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 08:57:38.729727    5455 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0216 08:57:38.729751    5455 cache.go:56] Caching tarball of preloaded images
	I0216 08:57:38.729994    5455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 08:57:38.751537    5455 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0216 08:57:38.794354    5455 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0216 08:57:39.354078    5455 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0216 08:57:56.686139    5455 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0216 08:57:56.686349    5455 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0216 08:57:57.328634    5455 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0216 08:57:57.328873    5455 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/config.json ...
	I0216 08:57:57.328897    5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/config.json: {Name:mkcf0f7ad907db6fa82502d38c90f22d7a31a393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 08:57:57.329647    5455 cache.go:194] Successfully downloaded all kic artifacts
	I0216 08:57:57.329679    5455 start.go:365] acquiring machines lock for ingress-addon-legacy-502000: {Name:mkaa184d9ec1a667ce31139c0cb669fd5169a0b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 08:57:57.329928    5455 start.go:369] acquired machines lock for "ingress-addon-legacy-502000" in 212.988µs
	I0216 08:57:57.329971    5455 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-502000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-502000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 08:57:57.330071    5455 start.go:125] createHost starting for "" (driver="docker")
	I0216 08:57:57.362977    5455 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0216 08:57:57.363311    5455 start.go:159] libmachine.API.Create for "ingress-addon-legacy-502000" (driver="docker")
	I0216 08:57:57.363363    5455 client.go:168] LocalClient.Create starting
	I0216 08:57:57.363952    5455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem
	I0216 08:57:57.364386    5455 main.go:141] libmachine: Decoding PEM data...
	I0216 08:57:57.364415    5455 main.go:141] libmachine: Parsing certificate...
	I0216 08:57:57.364516    5455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem
	I0216 08:57:57.364873    5455 main.go:141] libmachine: Decoding PEM data...
	I0216 08:57:57.364889    5455 main.go:141] libmachine: Parsing certificate...
	I0216 08:57:57.385100    5455 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-502000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0216 08:57:57.438198    5455 cli_runner.go:211] docker network inspect ingress-addon-legacy-502000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0216 08:57:57.438321    5455 network_create.go:281] running [docker network inspect ingress-addon-legacy-502000] to gather additional debugging logs...
	I0216 08:57:57.438342    5455 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-502000
	W0216 08:57:57.490978    5455 cli_runner.go:211] docker network inspect ingress-addon-legacy-502000 returned with exit code 1
	I0216 08:57:57.491013    5455 network_create.go:284] error running [docker network inspect ingress-addon-legacy-502000]: docker network inspect ingress-addon-legacy-502000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-502000 not found
	I0216 08:57:57.491031    5455 network_create.go:286] output of [docker network inspect ingress-addon-legacy-502000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-502000 not found
	
	** /stderr **
	I0216 08:57:57.491179    5455 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 08:57:57.543835    5455 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020c0b90}
	I0216 08:57:57.543871    5455 network_create.go:124] attempt to create docker network ingress-addon-legacy-502000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0216 08:57:57.543948    5455 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-502000 ingress-addon-legacy-502000
	I0216 08:57:57.636009    5455 network_create.go:108] docker network ingress-addon-legacy-502000 192.168.49.0/24 created
	I0216 08:57:57.636078    5455 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-502000" container
	I0216 08:57:57.636223    5455 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0216 08:57:57.689437    5455 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-502000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-502000 --label created_by.minikube.sigs.k8s.io=true
	I0216 08:57:57.742591    5455 oci.go:103] Successfully created a docker volume ingress-addon-legacy-502000
	I0216 08:57:57.742717    5455 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-502000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-502000 --entrypoint /usr/bin/test -v ingress-addon-legacy-502000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0216 08:57:58.197156    5455 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-502000
	I0216 08:57:58.197190    5455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 08:57:58.197202    5455 kic.go:194] Starting extracting preloaded images to volume ...
	I0216 08:57:58.197320    5455 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-502000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0216 08:58:00.964194    5455 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-502000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (2.766844965s)
	I0216 08:58:00.964225    5455 kic.go:203] duration metric: took 2.767060 seconds to extract preloaded images to volume
	I0216 08:58:00.964355    5455 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0216 08:58:01.077477    5455 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-502000 --name ingress-addon-legacy-502000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-502000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-502000 --network ingress-addon-legacy-502000 --ip 192.168.49.2 --volume ingress-addon-legacy-502000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0216 08:58:01.392926    5455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Running}}
	I0216 08:58:01.451770    5455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Status}}
	I0216 08:58:01.513261    5455 cli_runner.go:164] Run: docker exec ingress-addon-legacy-502000 stat /var/lib/dpkg/alternatives/iptables
	I0216 08:58:01.624531    5455 oci.go:144] the created container "ingress-addon-legacy-502000" has a running status.
	I0216 08:58:01.624575    5455 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa...
	I0216 08:58:01.696043    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0216 08:58:01.696196    5455 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0216 08:58:01.769109    5455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Status}}
	I0216 08:58:01.829255    5455 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0216 08:58:01.829302    5455 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-502000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0216 08:58:01.950859    5455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Status}}
	I0216 08:58:02.008225    5455 machine.go:88] provisioning docker machine ...
	I0216 08:58:02.008278    5455 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-502000"
	I0216 08:58:02.008395    5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 08:58:02.067425    5455 main.go:141] libmachine: Using SSH client type: native
	I0216 08:58:02.067769    5455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 50597 <nil> <nil>}
	I0216 08:58:02.067786    5455 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-502000 && echo "ingress-addon-legacy-502000" | sudo tee /etc/hostname
	I0216 08:58:02.233550    5455 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-502000
	
	I0216 08:58:02.233636    5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 08:58:02.290088    5455 main.go:141] libmachine: Using SSH client type: native
	I0216 08:58:02.290373    5455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 50597 <nil> <nil>}
	I0216 08:58:02.290388    5455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-502000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-502000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-502000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 08:58:02.430074    5455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 08:58:02.430097    5455 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17936-1021/.minikube CaCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17936-1021/.minikube}
	I0216 08:58:02.430117    5455 ubuntu.go:177] setting up certificates
	I0216 08:58:02.430125    5455 provision.go:83] configureAuth start
	I0216 08:58:02.430182    5455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-502000
	I0216 08:58:02.486559    5455 provision.go:138] copyHostCerts
	I0216 08:58:02.486632    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
	I0216 08:58:02.486735    5455 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem, removing ...
	I0216 08:58:02.486742    5455 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
	I0216 08:58:02.486894    5455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem (1082 bytes)
	I0216 08:58:02.487086    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
	I0216 08:58:02.487156    5455 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem, removing ...
	I0216 08:58:02.487161    5455 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
	I0216 08:58:02.487279    5455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem (1123 bytes)
	I0216 08:58:02.487494    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
	I0216 08:58:02.487565    5455 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem, removing ...
	I0216 08:58:02.487571    5455 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
	I0216 08:58:02.487716    5455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem (1675 bytes)
	I0216 08:58:02.488172    5455 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-502000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-502000]
	I0216 08:58:02.604837    5455 provision.go:172] copyRemoteCerts
	I0216 08:58:02.605120    5455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 08:58:02.605183    5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 08:58:02.661178    5455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
	I0216 08:58:02.765895    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0216 08:58:02.777605    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 08:58:02.823382    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0216 08:58:02.823468    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0216 08:58:02.867130    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0216 08:58:02.867295    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 08:58:02.911459    5455 provision.go:86] duration metric: configureAuth took 481.322673ms
	I0216 08:58:02.911476    5455 ubuntu.go:193] setting minikube options for container-runtime
	I0216 08:58:02.911638    5455 config.go:182] Loaded profile config "ingress-addon-legacy-502000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 08:58:02.911715    5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 08:58:02.967022    5455 main.go:141] libmachine: Using SSH client type: native
	I0216 08:58:02.967334    5455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 50597 <nil> <nil>}
	I0216 08:58:02.967351    5455 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 08:58:03.110293    5455 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 08:58:03.110310    5455 ubuntu.go:71] root file system type: overlay
	I0216 08:58:03.110386    5455 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 08:58:03.110469    5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 08:58:03.218860    5455 main.go:141] libmachine: Using SSH client type: native
	I0216 08:58:03.219264    5455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 50597 <nil> <nil>}
	I0216 08:58:03.219327    5455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 08:58:03.388450    5455 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 08:58:03.388666    5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 08:58:03.443922    5455 main.go:141] libmachine: Using SSH client type: native
	I0216 08:58:03.444244    5455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 50597 <nil> <nil>}
	I0216 08:58:03.444259    5455 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 08:58:04.115332    5455 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-16 16:58:03.383626691 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
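
Note: the docker.service rewrite above (and the diff that follows it) relies on a standard systemd override pattern: the first, empty ExecStart= clears the command inherited from the base unit, and the second ExecStart= installs the replacement; without the reset, systemd refuses to start the service, since only Type=oneshot units may carry multiple ExecStart= settings. The same pattern as a stand-alone drop-in, sketched with a hypothetical override file that is not part of this run:

    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
    [Service]
    # Empty ExecStart= resets the inherited command before redefining it.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker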
	
	I0216 08:58:04.115358    5455 machine.go:91] provisioned docker machine in 2.107136588s
	I0216 08:58:04.115366    5455 client.go:171] LocalClient.Create took 6.752093637s
	I0216 08:58:04.115383    5455 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-502000" took 6.75217795s
	I0216 08:58:04.115394    5455 start.go:300] post-start starting for "ingress-addon-legacy-502000" (driver="docker")
	I0216 08:58:04.115401    5455 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 08:58:04.115468    5455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 08:58:04.115535    5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 08:58:04.169442    5455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
	I0216 08:58:04.273350    5455 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 08:58:04.277814    5455 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 08:58:04.277843    5455 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 08:58:04.277851    5455 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 08:58:04.277856    5455 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 08:58:04.277867    5455 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/addons for local assets ...
	I0216 08:58:04.277971    5455 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/files for local assets ...
	I0216 08:58:04.278415    5455 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> 21512.pem in /etc/ssl/certs
	I0216 08:58:04.278423    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> /etc/ssl/certs/21512.pem
	I0216 08:58:04.278664    5455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 08:58:04.295330    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /etc/ssl/certs/21512.pem (1708 bytes)
	I0216 08:58:04.338586    5455 start.go:303] post-start completed in 223.186121ms
	I0216 08:58:04.339186    5455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-502000
	I0216 08:58:04.394625    5455 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/config.json ...
	I0216 08:58:04.395682    5455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 08:58:04.395748    5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 08:58:04.448006    5455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
	I0216 08:58:04.541666    5455 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 08:58:04.547340    5455 start.go:128] duration metric: createHost completed in 7.217361727s
	I0216 08:58:04.547355    5455 start.go:83] releasing machines lock for "ingress-addon-legacy-502000", held for 7.217502013s
	I0216 08:58:04.547435    5455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-502000
	I0216 08:58:04.601820    5455 ssh_runner.go:195] Run: cat /version.json
	I0216 08:58:04.601898    5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 08:58:04.602424    5455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 08:58:04.602766    5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 08:58:04.659083    5455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
	I0216 08:58:04.659095    5455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
	I0216 08:58:04.866405    5455 ssh_runner.go:195] Run: systemctl --version
	I0216 08:58:04.871337    5455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 08:58:04.876819    5455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 08:58:04.921333    5455 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 08:58:04.921471    5455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 08:58:04.954287    5455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 08:58:04.985340    5455 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
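The three find/sed invocations above patch the loopback, bridge, and podman CNI configs in place. A minimal way to eyeball the result on the node (file paths taken from the cni.go:308 line above; a sketch, not part of the test run):

	sudo cat /etc/cni/net.d/100-crio-bridge.conf      # "subnet" should now read 10.244.0.0/16
	sudo cat /etc/cni/net.d/87-podman-bridge.conflist # "gateway" should now read 10.244.0.1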
	I0216 08:58:04.985389    5455 start.go:475] detecting cgroup driver to use...
	I0216 08:58:04.985408    5455 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 08:58:04.985555    5455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 08:58:05.016154    5455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0216 08:58:05.034331    5455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 08:58:05.053270    5455 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 08:58:05.053324    5455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 08:58:05.070568    5455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 08:58:05.088143    5455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 08:58:05.106015    5455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 08:58:05.122714    5455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 08:58:05.140723    5455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 08:58:05.158387    5455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 08:58:05.173464    5455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 08:58:05.191352    5455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 08:58:05.257009    5455 ssh_runner.go:195] Run: sudo systemctl restart containerd
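Between the crictl.yaml write and the restart above, every containerd change is a sed over /etc/containerd/config.toml. A quick sanity check that the rewrites stuck (a sketch using only files and units named in this run):

	sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	sudo systemctl is-active containerd   # should report "active" after the restart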
	I0216 08:58:05.353568    5455 start.go:475] detecting cgroup driver to use...
	I0216 08:58:05.353589    5455 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 08:58:05.353656    5455 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 08:58:05.374328    5455 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 08:58:05.374407    5455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 08:58:05.396963    5455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 08:58:05.427964    5455 ssh_runner.go:195] Run: which cri-dockerd
	I0216 08:58:05.433157    5455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 08:58:05.450919    5455 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 08:58:05.485293    5455 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 08:58:05.591548    5455 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 08:58:05.662182    5455 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 08:58:05.662282    5455 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 08:58:05.692438    5455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 08:58:05.759831    5455 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 08:58:06.029300    5455 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 08:58:06.052250    5455 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
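docker.go:574 above rewrites /etc/docker/daemon.json so the daemon uses cgroupfs. One way to confirm on the node (the first command is the same check minikube itself runs later in this log):

	docker info --format '{{.CgroupDriver}}'   # expected: cgroupfs
	sudo cat /etc/docker/daemon.json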
	I0216 08:58:06.120286    5455 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	I0216 08:58:06.120464    5455 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-502000 dig +short host.docker.internal
	I0216 08:58:06.222503    5455 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0216 08:58:06.222965    5455 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0216 08:58:06.227733    5455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
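The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the freshly dug IP. To verify resolution inside the node (a sketch; getent is assumed to be present in the Ubuntu 22.04 base image):

	getent hosts host.minikube.internal   # should print 192.168.65.254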
	I0216 08:58:06.247303    5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 08:58:06.300395    5455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 08:58:06.300481    5455 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 08:58:06.318382    5455 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0216 08:58:06.318402    5455 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0216 08:58:06.318468    5455 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 08:58:06.334311    5455 ssh_runner.go:195] Run: which lz4
	I0216 08:58:06.339327    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0216 08:58:06.339964    5455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 08:58:06.344359    5455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 08:58:06.344380    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0216 08:58:13.462734    5455 docker.go:649] Took 7.123434 seconds to copy over tarball
	I0216 08:58:13.462809    5455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 08:58:15.244006    5455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.781199765s)
	I0216 08:58:15.244027    5455 ssh_runner.go:146] rm: /preloaded.tar.lz4
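For scale: docker.go:649 above reports 424164442 bytes copied in 7.123434 seconds, i.e. roughly 57 MiB/s over the SSH tunnel. The arithmetic can be checked directly:

	awk 'BEGIN { printf "%.1f MiB/s\n", 424164442 / 7.123434 / 1048576 }'   # ≈ 56.8 MiB/s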
	I0216 08:58:15.300372    5455 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 08:58:15.315656    5455 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0216 08:58:15.346394    5455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 08:58:15.411323    5455 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 08:58:16.753285    5455 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.341944187s)
	I0216 08:58:16.753372    5455 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 08:58:16.770484    5455 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0216 08:58:16.770498    5455 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0216 08:58:16.770513    5455 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 08:58:16.775138    5455 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 08:58:16.775266    5455 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0216 08:58:16.775463    5455 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0216 08:58:16.775937    5455 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0216 08:58:16.776268    5455 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 08:58:16.776387    5455 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0216 08:58:16.776995    5455 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0216 08:58:16.777101    5455 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0216 08:58:16.780811    5455 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0216 08:58:16.781618    5455 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0216 08:58:16.781686    5455 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0216 08:58:16.783313    5455 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0216 08:58:16.783872    5455 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 08:58:16.783919    5455 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0216 08:58:16.783938    5455 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 08:58:16.784049    5455 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0216 08:58:18.779701    5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0216 08:58:18.797477    5455 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0216 08:58:18.797516    5455 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0216 08:58:18.797576    5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0216 08:58:18.814734    5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0216 08:58:18.853761    5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0216 08:58:18.872260    5455 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0216 08:58:18.872289    5455 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0216 08:58:18.872353    5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0216 08:58:18.889877    5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0216 08:58:18.901292    5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0216 08:58:18.918480    5455 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0216 08:58:18.918507    5455 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0216 08:58:18.918571    5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0216 08:58:18.932040    5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 08:58:18.932617    5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0216 08:58:18.933711    5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0216 08:58:18.935865    5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0216 08:58:18.944765    5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0216 08:58:18.955047    5455 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0216 08:58:18.955073    5455 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 08:58:18.955094    5455 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0216 08:58:18.955110    5455 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0216 08:58:18.955128    5455 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0216 08:58:18.955142    5455 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0216 08:58:18.955154    5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0216 08:58:18.955156    5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 08:58:18.955199    5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0216 08:58:18.969357    5455 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0216 08:58:18.969416    5455 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0216 08:58:18.969535    5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0216 08:58:18.991832    5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0216 08:58:18.993332    5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0216 08:58:18.993352    5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0216 08:58:18.998487    5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0216 08:58:19.399112    5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 08:58:19.417822    5455 cache_images.go:92] LoadImages completed in 2.647334221s
	W0216 08:58:19.417867    5455 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0216 08:58:19.417945    5455 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 08:58:19.467564    5455 cni.go:84] Creating CNI manager for ""
	I0216 08:58:19.467589    5455 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 08:58:19.467608    5455 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 08:58:19.467625    5455 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-502000 NodeName:ingress-addon-legacy-502000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 08:58:19.467782    5455 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-502000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
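Before the full init at 08:58:20 below, the generated YAML can be exercised on its own; kubeadm has shipped the init phase subcommands since v1.13, so the preflight phase alone is runnable against this config (a sketch assuming the same paths as this run):

	sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml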
	I0216 08:58:19.467857    5455 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-502000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-502000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
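The unit text above is installed as a drop-in (the 10-kubeadm.conf scp two lines below), so the effective kubelet invocation is the merge of the base service and the override. systemd can print the merged view (a sketch; same pattern as the systemctl cat docker.service call earlier in this log):

	systemctl cat kubelet   # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in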
	I0216 08:58:19.467937    5455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0216 08:58:19.483719    5455 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 08:58:19.483771    5455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 08:58:19.499674    5455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0216 08:58:19.528495    5455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0216 08:58:19.559535    5455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0216 08:58:19.590356    5455 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0216 08:58:19.594800    5455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 08:58:19.612675    5455 certs.go:56] Setting up /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000 for IP: 192.168.49.2
	I0216 08:58:19.612732    5455 certs.go:190] acquiring lock for shared ca certs: {Name:mk8795f926ccc5dd497b243df5a2c158b5c5b28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 08:58:19.613268    5455 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key
	I0216 08:58:19.613557    5455 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key
	I0216 08:58:19.613608    5455 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.key
	I0216 08:58:19.613623    5455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.crt with IP's: []
	I0216 08:58:19.794216    5455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.crt ...
	I0216 08:58:19.794231    5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.crt: {Name:mk007431836d8995fd7c22de8c14850cae5ca9ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 08:58:19.794554    5455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.key ...
	I0216 08:58:19.794564    5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.key: {Name:mkeb539da9b3168b95889f91a7453b7d5c2b2e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 08:58:19.794794    5455 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key.dd3b5fb2
	I0216 08:58:19.794811    5455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0216 08:58:19.846807    5455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt.dd3b5fb2 ...
	I0216 08:58:19.846818    5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt.dd3b5fb2: {Name:mk5480c7b30447a8a0f8b617cf7dff4aab9c8c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 08:58:19.847079    5455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key.dd3b5fb2 ...
	I0216 08:58:19.847087    5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key.dd3b5fb2: {Name:mk31590b01b058cbf0eca75dfc306771ef7085cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 08:58:19.847281    5455 certs.go:337] copying /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt
	I0216 08:58:19.847484    5455 certs.go:341] copying /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key
	I0216 08:58:19.847647    5455 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.key
	I0216 08:58:19.847660    5455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.crt with IP's: []
	I0216 08:58:19.955161    5455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.crt ...
	I0216 08:58:19.955174    5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.crt: {Name:mk9a5f2a0bdda23065003abebaa4a93798b37f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 08:58:19.955438    5455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.key ...
	I0216 08:58:19.955447    5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.key: {Name:mk160ac2947238b07d42e0a7d5fdc070ffd4f536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 08:58:19.955644    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0216 08:58:19.955676    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0216 08:58:19.955699    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0216 08:58:19.955717    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0216 08:58:19.955736    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0216 08:58:19.955755    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0216 08:58:19.955772    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0216 08:58:19.955788    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0216 08:58:19.955889    5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem (1338 bytes)
	W0216 08:58:19.956193    5455 certs.go:433] ignoring /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151_empty.pem, impossibly tiny 0 bytes
	I0216 08:58:19.956205    5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 08:58:19.956247    5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem (1082 bytes)
	I0216 08:58:19.956285    5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem (1123 bytes)
	I0216 08:58:19.956323    5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem (1675 bytes)
	I0216 08:58:19.956415    5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem (1708 bytes)
	I0216 08:58:19.956458    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> /usr/share/ca-certificates/21512.pem
	I0216 08:58:19.956482    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0216 08:58:19.956499    5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem -> /usr/share/ca-certificates/2151.pem
	I0216 08:58:19.956956    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 08:58:20.002332    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0216 08:58:20.044280    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 08:58:20.086480    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 08:58:20.128595    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 08:58:20.171059    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 08:58:20.214063    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 08:58:20.257675    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 08:58:20.299688    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /usr/share/ca-certificates/21512.pem (1708 bytes)
	I0216 08:58:20.344264    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 08:58:20.386004    5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem --> /usr/share/ca-certificates/2151.pem (1338 bytes)
	I0216 08:58:20.427242    5455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 08:58:20.459629    5455 ssh_runner.go:195] Run: openssl version
	I0216 08:58:20.465963    5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21512.pem && ln -fs /usr/share/ca-certificates/21512.pem /etc/ssl/certs/21512.pem"
	I0216 08:58:20.483280    5455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21512.pem
	I0216 08:58:20.487929    5455 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:51 /usr/share/ca-certificates/21512.pem
	I0216 08:58:20.487970    5455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21512.pem
	I0216 08:58:20.494921    5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21512.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 08:58:20.511663    5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 08:58:20.527629    5455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 08:58:20.532429    5455 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 08:58:20.532475    5455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 08:58:20.539939    5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 08:58:20.556322    5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2151.pem && ln -fs /usr/share/ca-certificates/2151.pem /etc/ssl/certs/2151.pem"
	I0216 08:58:20.574627    5455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2151.pem
	I0216 08:58:20.579388    5455 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:51 /usr/share/ca-certificates/2151.pem
	I0216 08:58:20.579431    5455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2151.pem
	I0216 08:58:20.586713    5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2151.pem /etc/ssl/certs/51391683.0"
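Each test -L/ln -fs pair above builds the OpenSSL c_rehash-style trust layout: the symlink name is the certificate's subject hash plus a .0 suffix, which is where the 3ec20f2e.0, b5213941.0, and 51391683.0 names come from. The hash is reproducible by hand (a sketch using a path from this run):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem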
	I0216 08:58:20.603116    5455 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 08:58:20.607406    5455 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0216 08:58:20.607450    5455 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-502000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-502000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 08:58:20.607547    5455 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 08:58:20.624113    5455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 08:58:20.640900    5455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 08:58:20.656837    5455 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 08:58:20.656901    5455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 08:58:20.671988    5455 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 08:58:20.672014    5455 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 08:58:20.728634    5455 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0216 08:58:20.728675    5455 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 08:58:20.978518    5455 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 08:58:20.978636    5455 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 08:58:20.978740    5455 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0216 08:58:21.140287    5455 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 08:58:21.141115    5455 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 08:58:21.141156    5455 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 08:58:21.224519    5455 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 08:58:21.267372    5455 out.go:204]   - Generating certificates and keys ...
	I0216 08:58:21.267475    5455 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 08:58:21.267559    5455 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 08:58:21.286891    5455 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0216 08:58:21.598643    5455 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0216 08:58:21.746290    5455 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0216 08:58:21.888216    5455 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0216 08:58:22.015562    5455 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0216 08:58:22.015788    5455 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0216 08:58:22.106306    5455 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0216 08:58:22.106414    5455 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0216 08:58:22.179275    5455 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0216 08:58:22.241942    5455 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0216 08:58:22.528004    5455 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0216 08:58:22.528101    5455 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 08:58:22.581010    5455 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 08:58:22.695276    5455 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 08:58:22.905791    5455 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 08:58:23.016194    5455 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 08:58:23.017585    5455 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 08:58:23.038369    5455 out.go:204]   - Booting up control plane ...
	I0216 08:58:23.038474    5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 08:58:23.038551    5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 08:58:23.038642    5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 08:58:23.038755    5455 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 08:58:23.038944    5455 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 08:59:03.027299    5455 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 08:59:03.027705    5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 08:59:03.027866    5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 08:59:08.028602    5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 08:59:08.028765    5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 08:59:18.029387    5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 08:59:18.029549    5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 08:59:38.030231    5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 08:59:38.030373    5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:00:18.030705    5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:00:18.030923    5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:00:18.030943    5455 kubeadm.go:322] 
	I0216 09:00:18.030976    5455 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0216 09:00:18.031029    5455 kubeadm.go:322] 		timed out waiting for the condition
	I0216 09:00:18.031046    5455 kubeadm.go:322] 
	I0216 09:00:18.031088    5455 kubeadm.go:322] 	This error is likely caused by:
	I0216 09:00:18.031119    5455 kubeadm.go:322] 		- The kubelet is not running
	I0216 09:00:18.031218    5455 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 09:00:18.031227    5455 kubeadm.go:322] 
	I0216 09:00:18.031304    5455 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 09:00:18.031352    5455 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0216 09:00:18.031402    5455 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0216 09:00:18.031423    5455 kubeadm.go:322] 
	I0216 09:00:18.031549    5455 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 09:00:18.031644    5455 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0216 09:00:18.031711    5455 kubeadm.go:322] 
	I0216 09:00:18.031882    5455 kubeadm.go:322] 	Here is one example of how you may list all Kubernetes containers running in docker:
	I0216 09:00:18.031945    5455 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0216 09:00:18.032054    5455 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0216 09:00:18.032121    5455 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0216 09:00:18.032151    5455 kubeadm.go:322] 
	I0216 09:00:18.036609    5455 kubeadm.go:322] W0216 16:58:20.728027    1772 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0216 09:00:18.036766    5455 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 09:00:18.036851    5455 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0216 09:00:18.036954    5455 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0216 09:00:18.037043    5455 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 09:00:18.037150    5455 kubeadm.go:322] W0216 16:58:23.021805    1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 09:00:18.037257    5455 kubeadm.go:322] W0216 16:58:23.022723    1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 09:00:18.037331    5455 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 09:00:18.037398    5455 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
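At this point kubeadm has given up after the 4m0s wait-control-plane window. The checks it suggests can be run against the node container from this run (a sketch; the exec pattern and container name are taken from the dig call earlier in this log):

	docker exec -t ingress-addon-legacy-502000 systemctl status kubelet
	docker exec -t ingress-addon-legacy-502000 journalctl -xeu kubelet --no-pager | tail -n 50
	docker exec -t ingress-addon-legacy-502000 curl -sSL http://localhost:10248/healthz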
	W0216 09:00:18.037480    5455 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 16:58:20.728027    1772 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 16:58:23.021805    1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 16:58:23.022723    1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 16:58:20.728027    1772 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 16:58:23.021805    1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 16:58:23.022723    1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0216 09:00:18.037519    5455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 09:00:18.571956    5455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 09:00:18.589353    5455 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 09:00:18.589404    5455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:00:18.605150    5455 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 09:00:18.605204    5455 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 09:00:18.672441    5455 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0216 09:00:18.672510    5455 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 09:00:18.922731    5455 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 09:00:18.922830    5455 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 09:00:18.922919    5455 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 09:00:19.119860    5455 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 09:00:19.120910    5455 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 09:00:19.120948    5455 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 09:00:19.205192    5455 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 09:00:19.247548    5455 out.go:204]   - Generating certificates and keys ...
	I0216 09:00:19.247642    5455 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 09:00:19.247706    5455 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 09:00:19.247789    5455 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 09:00:19.247864    5455 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 09:00:19.247932    5455 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 09:00:19.247990    5455 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 09:00:19.248066    5455 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 09:00:19.248119    5455 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 09:00:19.248195    5455 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 09:00:19.248259    5455 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 09:00:19.248291    5455 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 09:00:19.248340    5455 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 09:00:19.472915    5455 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 09:00:19.642944    5455 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 09:00:19.925706    5455 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 09:00:20.124250    5455 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 09:00:20.125142    5455 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 09:00:20.145545    5455 out.go:204]   - Booting up control plane ...
	I0216 09:00:20.145724    5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 09:00:20.145847    5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 09:00:20.145973    5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 09:00:20.146108    5455 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 09:00:20.146368    5455 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 09:01:00.145396    5455 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 09:01:00.146078    5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:01:00.146303    5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:01:05.153564    5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:01:05.153731    5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:01:15.161939    5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:01:15.162096    5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:01:35.169255    5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:01:35.169418    5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:02:15.172751    5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:02:15.173057    5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:02:15.173080    5455 kubeadm.go:322] 
	I0216 09:02:15.173132    5455 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0216 09:02:15.173172    5455 kubeadm.go:322] 		timed out waiting for the condition
	I0216 09:02:15.173181    5455 kubeadm.go:322] 
	I0216 09:02:15.173238    5455 kubeadm.go:322] 	This error is likely caused by:
	I0216 09:02:15.173289    5455 kubeadm.go:322] 		- The kubelet is not running
	I0216 09:02:15.173399    5455 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 09:02:15.173411    5455 kubeadm.go:322] 
	I0216 09:02:15.173503    5455 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 09:02:15.173530    5455 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0216 09:02:15.173561    5455 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0216 09:02:15.173572    5455 kubeadm.go:322] 
	I0216 09:02:15.173649    5455 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 09:02:15.173721    5455 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0216 09:02:15.173727    5455 kubeadm.go:322] 
	I0216 09:02:15.173794    5455 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0216 09:02:15.173831    5455 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0216 09:02:15.173893    5455 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0216 09:02:15.173920    5455 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0216 09:02:15.173927    5455 kubeadm.go:322] 
	I0216 09:02:15.178000    5455 kubeadm.go:322] W0216 17:00:18.672180    4774 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0216 09:02:15.178148    5455 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 09:02:15.178203    5455 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0216 09:02:15.178312    5455 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0216 09:02:15.178407    5455 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 09:02:15.178512    5455 kubeadm.go:322] W0216 17:00:20.129169    4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 09:02:15.178618    5455 kubeadm.go:322] W0216 17:00:20.129911    4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 09:02:15.178680    5455 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 09:02:15.178741    5455 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 09:02:15.178769    5455 kubeadm.go:406] StartCluster complete in 3m54.54171177s
	I0216 09:02:15.179983    5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:02:15.197604    5455 logs.go:276] 0 containers: []
	W0216 09:02:15.197618    5455 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:02:15.197691    5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:02:15.214710    5455 logs.go:276] 0 containers: []
	W0216 09:02:15.214723    5455 logs.go:278] No container was found matching "etcd"
	I0216 09:02:15.214797    5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:02:15.232954    5455 logs.go:276] 0 containers: []
	W0216 09:02:15.232968    5455 logs.go:278] No container was found matching "coredns"
	I0216 09:02:15.233041    5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:02:15.250798    5455 logs.go:276] 0 containers: []
	W0216 09:02:15.250812    5455 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:02:15.250901    5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:02:15.269104    5455 logs.go:276] 0 containers: []
	W0216 09:02:15.269134    5455 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:02:15.269226    5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:02:15.286229    5455 logs.go:276] 0 containers: []
	W0216 09:02:15.286245    5455 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:02:15.286305    5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:02:15.303718    5455 logs.go:276] 0 containers: []
	W0216 09:02:15.303735    5455 logs.go:278] No container was found matching "kindnet"
	I0216 09:02:15.303746    5455 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:02:15.303766    5455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:02:15.366132    5455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:02:15.366144    5455 logs.go:123] Gathering logs for Docker ...
	I0216 09:02:15.366152    5455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:02:15.389489    5455 logs.go:123] Gathering logs for container status ...
	I0216 09:02:15.389507    5455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:02:15.456291    5455 logs.go:123] Gathering logs for kubelet ...
	I0216 09:02:15.456308    5455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:02:15.503191    5455 logs.go:123] Gathering logs for dmesg ...
	I0216 09:02:15.503210    5455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0216 09:02:15.524513    5455 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 17:00:18.672180    4774 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 17:00:20.129169    4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 17:00:20.129911    4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 09:02:15.524536    5455 out.go:239] * 
	W0216 09:02:15.524574    5455 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 17:00:18.672180    4774 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 17:00:20.129169    4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 17:00:20.129911    4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 09:02:15.524590    5455 out.go:239] * 
	W0216 09:02:15.525179    5455 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 09:02:15.588604    5455 out.go:177] 
	W0216 09:02:15.630384    5455 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 17:00:18.672180    4774 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 17:00:20.129169    4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 17:00:20.129911    4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 09:02:15.630464    5455 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 09:02:15.630491    5455 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 09:02:15.651589    5455 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-502000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (278.04s)
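
The failure above matches the pattern tracked in https://github.com/kubernetes/minikube/issues/4172: kubeadm's kubelet health check on 127.0.0.1:10248 is refused for the full 4m0s wait window, and the preflight warnings point at the likely cause (Docker 25.0.3 reports the "cgroupfs" cgroup driver and is far newer than the last version validated for kubeadm v1.18.20, which is 19.03). A retry that applies the log's own suggestion would look roughly like the following; the delete step and the flag values are copied from the failing invocation and the suggestion line, not verified against this run:

	out/minikube-darwin-amd64 delete -p ingress-addon-legacy-502000
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-502000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker --extra-config=kubelet.cgroup-driver=systemd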

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (109.04s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-502000 addons enable ingress --alsologtostderr -v=5
E0216 09:03:01.396920    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 09:03:59.609942    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-502000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m48.453540499s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0216 09:02:15.827976    5795 out.go:291] Setting OutFile to fd 1 ...
	I0216 09:02:15.828951    5795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:02:15.828958    5795 out.go:304] Setting ErrFile to fd 2...
	I0216 09:02:15.828963    5795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:02:15.829150    5795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 09:02:15.830163    5795 mustload.go:65] Loading cluster: ingress-addon-legacy-502000
	I0216 09:02:15.830805    5795 config.go:182] Loaded profile config "ingress-addon-legacy-502000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 09:02:15.830820    5795 addons.go:597] checking whether the cluster is paused
	I0216 09:02:15.830901    5795 config.go:182] Loaded profile config "ingress-addon-legacy-502000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 09:02:15.830917    5795 host.go:66] Checking if "ingress-addon-legacy-502000" exists ...
	I0216 09:02:15.831589    5795 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Status}}
	I0216 09:02:15.973685    5795 ssh_runner.go:195] Run: systemctl --version
	I0216 09:02:15.973766    5795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 09:02:16.035169    5795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
	I0216 09:02:16.129684    5795 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 09:02:16.172649    5795 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0216 09:02:16.193384    5795 config.go:182] Loaded profile config "ingress-addon-legacy-502000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 09:02:16.193400    5795 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-502000"
	I0216 09:02:16.193410    5795 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-502000"
	I0216 09:02:16.193442    5795 host.go:66] Checking if "ingress-addon-legacy-502000" exists ...
	I0216 09:02:16.193822    5795 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Status}}
	I0216 09:02:16.272440    5795 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0216 09:02:16.294586    5795 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0216 09:02:16.315350    5795 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0216 09:02:16.336511    5795 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0216 09:02:16.357640    5795 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0216 09:02:16.357658    5795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0216 09:02:16.357731    5795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 09:02:16.412756    5795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
	I0216 09:02:16.531967    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:16.594496    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:16.594522    5795 retry.go:31] will retry after 130.826599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:16.726919    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:16.790609    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:16.790639    5795 retry.go:31] will retry after 216.644052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:17.007524    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:17.073466    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:17.073505    5795 retry.go:31] will retry after 780.903085ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:17.854612    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:17.916608    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:17.916626    5795 retry.go:31] will retry after 1.173527428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:19.090554    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:19.154069    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:19.154094    5795 retry.go:31] will retry after 861.35405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:20.015794    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:20.075719    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:20.075737    5795 retry.go:31] will retry after 2.735843225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:22.811901    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:22.870046    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:22.870067    5795 retry.go:31] will retry after 2.76926425s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:25.639590    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:25.703596    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:25.703612    5795 retry.go:31] will retry after 5.892292577s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:31.597963    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:31.656670    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:31.656700    5795 retry.go:31] will retry after 5.615919362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:37.272831    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:37.334393    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:37.334411    5795 retry.go:31] will retry after 8.851141694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:46.186855    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:46.248197    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:46.248217    5795 retry.go:31] will retry after 7.651990926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:53.901102    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:02:53.964263    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:02:53.964280    5795 retry.go:31] will retry after 25.815950565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:03:19.781069    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:03:19.844505    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:03:19.844523    5795 retry.go:31] will retry after 44.163059728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:04.008210    5795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 09:04:04.070405    5795 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:04.070432    5795 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-502000"
	I0216 09:04:04.093497    5795 out.go:177] * Verifying ingress addon...
	I0216 09:04:04.136380    5795 out.go:177] 
	W0216 09:04:04.157440    5795 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-502000" does not exist: client config: context "ingress-addon-legacy-502000" does not exist]
	W0216 09:04:04.157462    5795 out.go:239] * 
	W0216 09:04:04.167216    5795 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 09:04:04.188389    5795 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
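
Note: every kubectl apply retry above fails with "The connection to the server localhost:8443 was refused", and the retry.go lines show a roughly exponential backoff (from about 130ms up to about 44s) before minikube gives up, so the apiserver inside the node never came up; this is consistent with the StartLegacyK8sCluster failure rather than an addon-specific bug. Two quick checks from the host, reusing command shapes that already appear in this log (illustrative):

	docker container inspect ingress-addon-legacy-502000 --format={{.State.Status}}
	out/minikube-darwin-amd64 -p ingress-addon-legacy-502000 ssh -- sudo docker ps --filter name=kube-apiserver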
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-502000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-502000:

-- stdout --
	[
	    {
	        "Id": "49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872",
	        "Created": "2024-02-16T16:58:01.1364207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51218,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T16:58:01.383558647Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/hostname",
	        "HostsPath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/hosts",
	        "LogPath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872-json.log",
	        "Name": "/ingress-addon-legacy-502000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-502000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-502000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-502000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-502000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-502000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-502000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-502000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc949f272d367c585ab260cd9736160741272d8647d2b9b216611f6facc43c44",
	            "SandboxKey": "/var/run/docker/netns/dc949f272d36",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50597"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50598"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50599"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50600"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50601"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-502000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "49a412abaf9d",
	                        "ingress-addon-legacy-502000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "7bb5e423999aea44294ffb1938bc4ab424f88fe0d09dca472aa1334161f8d75d",
	                    "EndpointID": "7fa5cbc2a06bfbde8f04f07f56a410e058a6367f2b377a9166e237e945c9fb86",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-502000",
	                        "49a412abaf9d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
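
Note on the inspect dump above: specific fields can be extracted with the same Go-template syntax the test driver uses in its cli_runner calls, rather than scanning the full JSON. For example, to read the host port mapped to the apiserver's 8443/tcp (a sketch using the template shape from this log, which shows 50601):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ingress-addon-legacy-502000

The HostConfig section also confirms the requested resources: Memory 4294967296 bytes matches the --memory=4096 passed to start, and NanoCpus 2000000000 corresponds to 2 CPUs.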
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-502000 -n ingress-addon-legacy-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-502000 -n ingress-addon-legacy-502000: exit status 6 (438.43892ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0216 09:04:04.777202    5851 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-502000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-502000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (109.04s)
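
Note: the status output above warns that kubectl points at a stale context, and the status helper confirms that "ingress-addon-legacy-502000" no longer appears in the kubeconfig. The fix the warning itself suggests, spelled out for this profile (illustrative):

	out/minikube-darwin-amd64 -p ingress-addon-legacy-502000 update-context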

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (83.49s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-502000 addons enable ingress-dns --alsologtostderr -v=5
E0216 09:04:27.292469    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-502000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m23.001081449s)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0216 09:04:04.854393    5861 out.go:291] Setting OutFile to fd 1 ...
	I0216 09:04:04.854729    5861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:04:04.854737    5861 out.go:304] Setting ErrFile to fd 2...
	I0216 09:04:04.854741    5861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:04:04.854922    5861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 09:04:04.855463    5861 mustload.go:65] Loading cluster: ingress-addon-legacy-502000
	I0216 09:04:04.855756    5861 config.go:182] Loaded profile config "ingress-addon-legacy-502000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 09:04:04.855771    5861 addons.go:597] checking whether the cluster is paused
	I0216 09:04:04.855846    5861 config.go:182] Loaded profile config "ingress-addon-legacy-502000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 09:04:04.855866    5861 host.go:66] Checking if "ingress-addon-legacy-502000" exists ...
	I0216 09:04:04.856312    5861 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Status}}
	I0216 09:04:04.908912    5861 ssh_runner.go:195] Run: systemctl --version
	I0216 09:04:04.909018    5861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 09:04:04.960668    5861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
	I0216 09:04:05.054861    5861 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 09:04:05.094213    5861 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0216 09:04:05.115189    5861 config.go:182] Loaded profile config "ingress-addon-legacy-502000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 09:04:05.115202    5861 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-502000"
	I0216 09:04:05.115210    5861 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-502000"
	I0216 09:04:05.115235    5861 host.go:66] Checking if "ingress-addon-legacy-502000" exists ...
	I0216 09:04:05.115536    5861 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Status}}
	I0216 09:04:05.189623    5861 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0216 09:04:05.211428    5861 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0216 09:04:05.234361    5861 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0216 09:04:05.234384    5861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0216 09:04:05.234497    5861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
	I0216 09:04:05.289470    5861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
	I0216 09:04:05.413373    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:05.482203    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:05.482250    5861 retry.go:31] will retry after 206.862727ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:05.690414    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:05.765013    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:05.765051    5861 retry.go:31] will retry after 513.361284ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:06.280176    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:06.344312    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:06.344336    5861 retry.go:31] will retry after 815.851892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:07.160742    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:07.219929    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:07.219951    5861 retry.go:31] will retry after 787.650856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:08.007724    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:08.068304    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:08.068337    5861 retry.go:31] will retry after 1.063022781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:09.131687    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:09.236444    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:09.236461    5861 retry.go:31] will retry after 1.434535449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:10.672884    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:10.730294    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:10.730312    5861 retry.go:31] will retry after 4.189037507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:14.919428    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:14.985238    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:14.985262    5861 retry.go:31] will retry after 5.173752717s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:20.161726    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:20.220903    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:20.220930    5861 retry.go:31] will retry after 4.426498751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:24.647613    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:24.716459    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:24.716482    5861 retry.go:31] will retry after 10.044034701s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:34.761161    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:34.820808    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:34.820826    5861 retry.go:31] will retry after 7.501152521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:42.323813    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:04:42.391120    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:04:42.391139    5861 retry.go:31] will retry after 23.283794389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:05:05.677444    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:05:05.742819    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:05:05.742837    5861 retry.go:31] will retry after 21.894481034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:05:27.637507    5861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0216 09:05:27.703911    5861 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 09:05:27.727016    5861 out.go:177] 
	W0216 09:05:27.748097    5861 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0216 09:05:27.748111    5861 out.go:239] * 
	W0216 09:05:27.753063    5861 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 09:05:27.773775    5861 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
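
Note: ingress-dns fails with the same "connection to the server localhost:8443 was refused" loop, so the root cause is upstream of both addon tests. The exact apply command is in the retry lines above; re-running it by hand inside the node is a direct way to reproduce (a sketch wrapping the command from this log in minikube ssh):

	out/minikube-darwin-amd64 -p ingress-addon-legacy-502000 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml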
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-502000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-502000:

-- stdout --
	[
	    {
	        "Id": "49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872",
	        "Created": "2024-02-16T16:58:01.1364207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51218,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T16:58:01.383558647Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/hostname",
	        "HostsPath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/hosts",
	        "LogPath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872-json.log",
	        "Name": "/ingress-addon-legacy-502000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-502000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-502000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-502000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-502000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-502000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-502000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-502000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc949f272d367c585ab260cd9736160741272d8647d2b9b216611f6facc43c44",
	            "SandboxKey": "/var/run/docker/netns/dc949f272d36",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50597"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50598"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50599"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50600"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50601"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-502000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "49a412abaf9d",
	                        "ingress-addon-legacy-502000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "7bb5e423999aea44294ffb1938bc4ab424f88fe0d09dca472aa1334161f8d75d",
	                    "EndpointID": "7fa5cbc2a06bfbde8f04f07f56a410e058a6367f2b377a9166e237e945c9fb86",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-502000",
	                        "49a412abaf9d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
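
When only one field of a dump like this matters, `docker inspect` accepts a Go template via -f; for example, against the container above:

    # state of the kic container (matches "Status": "running" above)
    docker inspect -f '{{.State.Status}}' ingress-addon-legacy-502000
    # host port mapped to the apiserver port 8443 (50601 in this run)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ingress-addon-legacy-502000
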
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-502000 -n ingress-addon-legacy-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-502000 -n ingress-addon-legacy-502000: exit status 6 (431.659897ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0216 09:05:28.271461    5916 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-502000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-502000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (83.49s)
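
Both post-mortems fail for the same reason recorded in stderr above: the host container is Running, but "ingress-addon-legacy-502000" is missing from the kubeconfig, so `status` cannot extract an endpoint. The repair the warning itself suggests, sketched against this profile:

    # sketch: rewrite the kubeconfig entry for the profile, then confirm the context
    minikube update-context -p ingress-addon-legacy-502000
    kubectl config current-context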

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.48s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-502000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-502000:

-- stdout --
	[
	    {
	        "Id": "49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872",
	        "Created": "2024-02-16T16:58:01.1364207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51218,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T16:58:01.383558647Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/hostname",
	        "HostsPath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/hosts",
	        "LogPath": "/var/lib/docker/containers/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872/49a412abaf9de9abfd3cbc7aa06f7bbe6fb1c5feeddc881296b7dcf90d7de872-json.log",
	        "Name": "/ingress-addon-legacy-502000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-502000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-502000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70f56a77ce562783734bd7f5db44271fe40d9826b16008ccf3faa73bc020340f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-502000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-502000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-502000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-502000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-502000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc949f272d367c585ab260cd9736160741272d8647d2b9b216611f6facc43c44",
	            "SandboxKey": "/var/run/docker/netns/dc949f272d36",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50597"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50598"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50599"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50600"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50601"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-502000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "49a412abaf9d",
	                        "ingress-addon-legacy-502000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "7bb5e423999aea44294ffb1938bc4ab424f88fe0d09dca472aa1334161f8d75d",
	                    "EndpointID": "7fa5cbc2a06bfbde8f04f07f56a410e058a6367f2b377a9166e237e945c9fb86",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-502000",
	                        "49a412abaf9d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-502000 -n ingress-addon-legacy-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-502000 -n ingress-addon-legacy-502000: exit status 6 (422.555867ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0216 09:05:28.750955    5928 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-502000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-502000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.48s)

TestSkaffold (323.63s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe299572403 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe299572403 version: (1.738383898s)
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-539000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-539000 --memory=2600 --driver=docker : (21.903652311s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe299572403 run --minikube-profile skaffold-539000 --kube-context skaffold-539000 --status-check=true --port-forward=false --interactive=false
E0216 09:23:01.386285    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 09:23:59.597945    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 09:26:04.442125    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe299572403 run --minikube-profile skaffold-539000 --kube-context skaffold-539000 --status-check=true --port-forward=false --interactive=false: signal: killed (4m48.059351473s)

-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	 - base -> base:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	 - base: Not found. Building
	Starting build...
	Found [skaffold-539000] context, using local docker daemon.
	Building [base]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load build definition from Dockerfile
	#1 transferring dockerfile: 250B done
	#1 DONE 0.0s
	
	#2 [internal] load metadata for gcr.io/distroless/base:latest
	#2 DONE 2.4s
	
	#3 [internal] load .dockerignore
	#3 transferring context: 2B done
	#3 DONE 0.0s
	
	#4 [1/1] FROM gcr.io/distroless/base:latest@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f
	#4 resolve gcr.io/distroless/base:latest@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f done
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0B / 755.29kB 0.1s
	#4 sha256:13190661cbc681abf8c1f3546231bb1ff46c88ce4750a2818426c6e493a09163 2.12kB / 2.12kB done
	#4 sha256:c8500b45821ad3ad625d1689bbe0fd12ca31d22865fbf19cc2e982f759ae2133 1.60kB / 1.60kB done
	#4 sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f 1.51kB / 1.51kB done
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 0B / 103.78kB 0.3s
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 0B / 21.20kB 0.3s
	#4 extracting sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 103.78kB / 103.78kB 0.8s done
	#4 extracting sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea done
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 0B / 317B 0.9s
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 21.20kB / 21.20kB 0.9s done
	#4 extracting sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 done
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 0B / 198B 1.1s
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 317B / 317B 1.2s done
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 0B / 113B 1.3s
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 198B / 198B 1.3s done
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 0B / 385B 1.4s
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 113B / 113B 1.6s done
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 0B / 355B 1.7s
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 755.29kB / 755.29kB 1.7s done
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 385B / 385B 1.7s done
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0.1s
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0B / 5.85MB 1.8s
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 0B / 130.56kB 1.8s
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 355B / 355B 1.9s done
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0B / 2.06MB 2.0s
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 130.56kB / 130.56kB 2.1s done
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0B / 968.57kB 2.2s
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0.9s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 1.05MB / 5.85MB 2.7s
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 1.05MB / 2.06MB 2.7s
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 2.10MB / 5.85MB 2.8s
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 2.06MB / 2.06MB 2.8s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 3.15MB / 5.85MB 2.9s
	#4 extracting sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 done
	#4 extracting sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 done
	#4 extracting sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c done
	#4 extracting sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 0.0s done
	#4 extracting sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 5.85MB / 5.85MB 3.0s
	#4 extracting sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a done
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 5.85MB / 5.85MB 3.0s done
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0.2s done
	#4 extracting sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0.0s done
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 968.57kB / 968.57kB 3.4s done
	#4 extracting sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0.0s done
	#4 DONE 3.6s
	
	#5 exporting to image
	#5 exporting layers done
	#5 writing image sha256:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789 done
	#5 naming to docker.io/library/base:latest done
	#5 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [base] succeeded
	Building [leeroy-web]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load build definition from Dockerfile
	#1 transferring dockerfile: 326B done
	#1 DONE 0.0s
	
	#2 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/golang:1.18
	#3 DONE 1.0s
	
	#4 [internal] load .dockerignore
	#4 transferring context: 2B done
	#4 DONE 0.0s
	
	#5 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#5 CACHED
	
	#6 [internal] load build context
	#6 transferring context: 565B done
	#6 DONE 0.0s
	
	#7 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#7 resolve docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 0B / 55.03MB 0.1s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0B / 10.88MB 0.1s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0B / 5.16MB 0.1s
	#7 sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da 2.36kB / 2.36kB done
	#7 sha256:740324e52de766f230ad7113fac9028399d6e03af34883de625dc2230ef7927e 1.80kB / 1.80kB done
	#7 sha256:c37a56a6d65476eabfb50e74421f16f415093e2d1bdd7f83e8bbb4b1a3eb2109 7.12kB / 7.12kB done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 0.2s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 8.39MB / 10.88MB 0.4s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 5.16MB / 5.16MB 0.4s done
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 0B / 54.58MB 0.4s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 10.88MB / 10.88MB 0.4s done
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 5.24MB / 54.58MB 0.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 0B / 85.98MB 0.5s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 39.85MB / 54.58MB 0.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 26.53MB / 85.98MB 0.7s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 54.58MB / 54.58MB 0.8s done
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 1.05MB / 141.98MB 0.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 27.26MB / 141.98MB 1.0s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 50.33MB / 141.98MB 1.1s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 1.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 88.08MB / 141.98MB 1.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 136.31MB / 141.98MB 1.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 141.98MB / 141.98MB 1.6s done
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 0B / 156B 1.7s
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 156B / 156B 1.7s done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 5.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 5.8s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 10.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 10.9s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 15.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 16.0s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 20.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 21.0s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 25.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 26.1s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 30.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 31.3s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 35.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 36.4s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 40.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 41.4s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 45.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 46.5s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 51.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 51.6s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 56.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 56.7s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 61.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 61.8s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 66.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 66.8s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 71.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 71.8s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 76.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 76.9s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 81.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 82.1s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 86.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 87.1s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 91.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 92.2s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 96.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 97.2s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 101.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 102.3s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 107.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 107.3s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 112.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 112.5s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 117.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 117.5s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 122.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 122.6s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 127.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 127.7s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 132.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 132.8s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 137.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 138.0s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 142.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 143.1s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 147.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 148.2s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 152.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 153.2s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 157.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 158.3s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 162.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 163.4s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 167.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 168.4s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 172.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 173.6s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 178.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 178.7s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 183.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 183.7s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 188.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 188.8s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 193.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 193.9s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 198.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 199.1s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 203.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 204.2s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 16.78MB / 55.03MB 204.8s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 32.51MB / 55.03MB 205.4s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 42.99MB / 55.03MB 205.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 38.80MB / 85.98MB 205.6s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 55.03MB / 55.03MB 205.7s done
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 205.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 63.96MB / 85.98MB 206.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 78.64MB / 85.98MB 206.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 85.98MB / 85.98MB 206.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 85.98MB / 85.98MB 206.2s done
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 3.6s done
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0.1s
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0.3s done
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.1s
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.3s done
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 3.5s done
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 3.6s done
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 5.2s
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 9.1s done
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a done
	#7 DONE 227.1s
	
	#8 [builder 2/5] WORKDIR /code
	#8 DONE 0.1s
	
	#9 [builder 3/5] COPY web.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .
	#11 DONE 21.7s
	
	#12 [stage-1 2/2] COPY --from=builder /app .
	#12 DONE 0.0s
	
	#13 exporting to image
	#13 exporting layers 0.0s done
	#13 writing image sha256:55551d3185922e7146dc0e3754f1023c0fd96652ad914a4c8f942c02d25ca808 done
	#13 naming to docker.io/library/leeroy-web:latest done
	#13 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [leeroy-web] succeeded
	Building [leeroy-app]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load build definition from Dockerfile
	#1 transferring dockerfile: 326B done
	#1 DONE 0.0s
	
	#2 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/golang:1.18
	#3 DONE 0.3s
	
	#4 [internal] load .dockerignore
	#4 transferring context: 2B done
	#4 DONE 0.0s
	
	#5 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#5 DONE 0.0s
	
	#6 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#6 DONE 0.0s
	
	#7 [builder 2/5] WORKDIR /code
	#7 CACHED
	
	#8 [internal] load build context
	#8 transferring context: 430B done
	#8 DONE 0.0s
	
	#9 [builder 3/5] COPY app.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .

-- /stdout --
** stderr ** 
	time="2024-02-16T09:21:21-08:00" level=error msg="ERROR: (gcloud.config.config-helper) You do not currently have an active account selected."
	time="2024-02-16T09:21:21-08:00" level=error msg="Please run:"
	time="2024-02-16T09:21:21-08:00" level=error
	time="2024-02-16T09:21:21-08:00" level=error msg="  $ gcloud auth login"
	time="2024-02-16T09:21:21-08:00" level=error
	time="2024-02-16T09:21:21-08:00" level=error msg="to obtain new credentials."
	time="2024-02-16T09:21:21-08:00" level=error
	time="2024-02-16T09:21:21-08:00" level=error msg="If you have already logged in with a different account, run:"
	time="2024-02-16T09:21:21-08:00" level=error
	time="2024-02-16T09:21:21-08:00" level=error msg="  $ gcloud config set account ACCOUNT"
	time="2024-02-16T09:21:21-08:00" level=error
	time="2024-02-16T09:21:21-08:00" level=error msg="to select an already authenticated account to use."

** /stderr **
skaffold_test.go:107: error running skaffold: signal: killed
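
Two distinct signals are mixed into this failure. The gcloud lines in stderr come from skaffold invoking the `gcloud config config-helper` credential path on a machine with no active account; they are noise, not the cause of the kill. The kill itself looks like a timeout: the golang:1.18 layer pulls stall (the "8.39MB / 55.03MB" and "30.41MB / 85.98MB" lines repeat for roughly 200 seconds above) and the run is terminated at 4m48s while leeroy-app is still building. A hedged workaround for a slow or flaky registry path is to warm the local daemon before the run:

    # sketch: pre-pull the two images the build blocks on so skaffold hits the cache
    docker pull golang:1.18
    docker pull gcr.io/distroless/base:latest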

-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	 - base -> base:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	 - base: Not found. Building
	Starting build...
	Found [skaffold-539000] context, using local docker daemon.
	Building [base]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load build definition from Dockerfile
	#1 transferring dockerfile: 250B done
	#1 DONE 0.0s
	
	#2 [internal] load metadata for gcr.io/distroless/base:latest
	#2 DONE 2.4s
	
	#3 [internal] load .dockerignore
	#3 transferring context: 2B done
	#3 DONE 0.0s
	
	#4 [1/1] FROM gcr.io/distroless/base:latest@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f
	#4 resolve gcr.io/distroless/base:latest@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f done
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0B / 755.29kB 0.1s
	#4 sha256:13190661cbc681abf8c1f3546231bb1ff46c88ce4750a2818426c6e493a09163 2.12kB / 2.12kB done
	#4 sha256:c8500b45821ad3ad625d1689bbe0fd12ca31d22865fbf19cc2e982f759ae2133 1.60kB / 1.60kB done
	#4 sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f 1.51kB / 1.51kB done
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 0B / 103.78kB 0.3s
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 0B / 21.20kB 0.3s
	#4 extracting sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 103.78kB / 103.78kB 0.8s done
	#4 extracting sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea done
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 0B / 317B 0.9s
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 21.20kB / 21.20kB 0.9s done
	#4 extracting sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 done
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 0B / 198B 1.1s
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 317B / 317B 1.2s done
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 0B / 113B 1.3s
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 198B / 198B 1.3s done
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 0B / 385B 1.4s
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 113B / 113B 1.6s done
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 0B / 355B 1.7s
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 755.29kB / 755.29kB 1.7s done
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 385B / 385B 1.7s done
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0.1s
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0B / 5.85MB 1.8s
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 0B / 130.56kB 1.8s
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 355B / 355B 1.9s done
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0B / 2.06MB 2.0s
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 130.56kB / 130.56kB 2.1s done
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0B / 968.57kB 2.2s
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0.9s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 1.05MB / 5.85MB 2.7s
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 1.05MB / 2.06MB 2.7s
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 2.10MB / 5.85MB 2.8s
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 2.06MB / 2.06MB 2.8s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 3.15MB / 5.85MB 2.9s
	#4 extracting sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 done
	#4 extracting sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 done
	#4 extracting sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c done
	#4 extracting sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 0.0s done
	#4 extracting sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 5.85MB / 5.85MB 3.0s
	#4 extracting sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a done
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 5.85MB / 5.85MB 3.0s done
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0.2s done
	#4 extracting sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0.0s done
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 968.57kB / 968.57kB 3.4s done
	#4 extracting sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0.0s done
	#4 DONE 3.6s
	
	#5 exporting to image
	#5 exporting layers done
	#5 writing image sha256:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789 done
	#5 naming to docker.io/library/base:latest done
	#5 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [base] succeeded
	Building [leeroy-web]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load build definition from Dockerfile
	#1 transferring dockerfile: 326B done
	#1 DONE 0.0s
	
	#2 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/golang:1.18
	#3 DONE 1.0s
	
	#4 [internal] load .dockerignore
	#4 transferring context: 2B done
	#4 DONE 0.0s
	
	#5 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#5 CACHED
	
	#6 [internal] load build context
	#6 transferring context: 565B done
	#6 DONE 0.0s
	
	#7 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#7 resolve docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 0B / 55.03MB 0.1s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0B / 10.88MB 0.1s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0B / 5.16MB 0.1s
	#7 sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da 2.36kB / 2.36kB done
	#7 sha256:740324e52de766f230ad7113fac9028399d6e03af34883de625dc2230ef7927e 1.80kB / 1.80kB done
	#7 sha256:c37a56a6d65476eabfb50e74421f16f415093e2d1bdd7f83e8bbb4b1a3eb2109 7.12kB / 7.12kB done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 0.2s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 8.39MB / 10.88MB 0.4s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 5.16MB / 5.16MB 0.4s done
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 0B / 54.58MB 0.4s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 10.88MB / 10.88MB 0.4s done
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 5.24MB / 54.58MB 0.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 0B / 85.98MB 0.5s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 39.85MB / 54.58MB 0.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 26.53MB / 85.98MB 0.7s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 54.58MB / 54.58MB 0.8s done
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 1.05MB / 141.98MB 0.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 27.26MB / 141.98MB 1.0s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 50.33MB / 141.98MB 1.1s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 1.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 88.08MB / 141.98MB 1.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 136.31MB / 141.98MB 1.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 141.98MB / 141.98MB 1.6s done
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 0B / 156B 1.7s
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 156B / 156B 1.7s done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 8.39MB / 55.03MB 5.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 30.41MB / 85.98MB 5.8s
	#7 [the two transfers above stalled: buildkit re-reported 8.39MB / 55.03MB and 30.41MB / 85.98MB unchanged every ~5s from 10.3s through 204.2s, at which point both downloads resumed]
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 16.78MB / 55.03MB 204.8s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 32.51MB / 55.03MB 205.4s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 42.99MB / 55.03MB 205.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 38.80MB / 85.98MB 205.6s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 55.03MB / 55.03MB 205.7s done
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 205.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 63.96MB / 85.98MB 206.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 78.64MB / 85.98MB 206.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 85.98MB / 85.98MB 206.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 85.98MB / 85.98MB 206.2s done
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 3.6s done
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0.1s
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0.3s done
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.1s
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.3s done
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 3.5s done
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 3.6s done
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 5.2s
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 9.1s done
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a done
	#7 DONE 227.1s
	
	#8 [builder 2/5] WORKDIR /code
	#8 DONE 0.1s
	
	#9 [builder 3/5] COPY web.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .
	#11 DONE 21.7s
	
	#12 [stage-1 2/2] COPY --from=builder /app .
	#12 DONE 0.0s
	
	#13 exporting to image
	#13 exporting layers 0.0s done
	#13 writing image sha256:55551d3185922e7146dc0e3754f1023c0fd96652ad914a4c8f942c02d25ca808 done
	#13 naming to docker.io/library/leeroy-web:latest done
	#13 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [leeroy-web] succeeded
	Building [leeroy-app]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load build definition from Dockerfile
	#1 transferring dockerfile: 326B done
	#1 DONE 0.0s
	
	#2 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/golang:1.18
	#3 DONE 0.3s
	
	#4 [internal] load .dockerignore
	#4 transferring context: 2B done
	#4 DONE 0.0s
	
	#5 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#5 DONE 0.0s
	
	#6 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#6 DONE 0.0s
	
	#7 [builder 2/5] WORKDIR /code
	#7 CACHED
	
	#8 [internal] load build context
	#8 transferring context: 430B done
	#8 DONE 0.0s
	
	#9 [builder 3/5] COPY app.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .

                                                
                                                
-- /stdout --
** stderr ** 
	time="2024-02-16T09:21:21-08:00" level=error msg="ERROR: (gcloud.config.config-helper) You do not currently have an active account selected."
	time="2024-02-16T09:21:21-08:00" level=error msg="Please run:"
	time="2024-02-16T09:21:21-08:00" level=error
	time="2024-02-16T09:21:21-08:00" level=error msg="  $ gcloud auth login"
	time="2024-02-16T09:21:21-08:00" level=error
	time="2024-02-16T09:21:21-08:00" level=error msg="to obtain new credentials."
	time="2024-02-16T09:21:21-08:00" level=error
	time="2024-02-16T09:21:21-08:00" level=error msg="If you have already logged in with a different account, run:"
	time="2024-02-16T09:21:21-08:00" level=error
	time="2024-02-16T09:21:21-08:00" level=error msg="  $ gcloud config set account ACCOUNT"
	time="2024-02-16T09:21:21-08:00" level=error
	time="2024-02-16T09:21:21-08:00" level=error msg="to select an already authenticated account to use."

                                                
                                                
** /stderr **
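The stderr above points at the proximate cause: skaffold invokes the gcloud config helper (gcloud.config.config-helper) while resolving credentials, and no active gcloud account is configured on the CI host. A minimal sketch of the remediation the error text itself suggests (ACCOUNT is a placeholder for an already-authorized identity):

	# interactive login; obtains and stores new credentials
	gcloud auth login
	# or select an account that was previously authorized on this machine
	gcloud config set account ACCOUNT
	# confirm that an active account is now selected
	gcloud auth list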
panic.go:523: *** TestSkaffold FAILED at 2024-02-16 09:26:06.505027 -0800 PST m=+2690.207940684
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-539000
helpers_test.go:235: (dbg) docker inspect skaffold-539000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c12a9d9fcb4394aa6d8136e1a5973692a47269cb020c0b818960207fac8d5610",
	        "Created": "2024-02-16T17:21:00.779262174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 169976,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:21:00.981077363Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/c12a9d9fcb4394aa6d8136e1a5973692a47269cb020c0b818960207fac8d5610/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c12a9d9fcb4394aa6d8136e1a5973692a47269cb020c0b818960207fac8d5610/hostname",
	        "HostsPath": "/var/lib/docker/containers/c12a9d9fcb4394aa6d8136e1a5973692a47269cb020c0b818960207fac8d5610/hosts",
	        "LogPath": "/var/lib/docker/containers/c12a9d9fcb4394aa6d8136e1a5973692a47269cb020c0b818960207fac8d5610/c12a9d9fcb4394aa6d8136e1a5973692a47269cb020c0b818960207fac8d5610-json.log",
	        "Name": "/skaffold-539000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "skaffold-539000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "skaffold-539000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2726297600,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2726297600,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/80e209b8a09dd1c5a8be81369e4a1aefb92b4a1ad10882ca91d4516447fddba2-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/80e209b8a09dd1c5a8be81369e4a1aefb92b4a1ad10882ca91d4516447fddba2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/80e209b8a09dd1c5a8be81369e4a1aefb92b4a1ad10882ca91d4516447fddba2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/80e209b8a09dd1c5a8be81369e4a1aefb92b4a1ad10882ca91d4516447fddba2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "skaffold-539000",
	                "Source": "/var/lib/docker/volumes/skaffold-539000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-539000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-539000",
	                "name.minikube.sigs.k8s.io": "skaffold-539000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e775d949b82450cac1d434d451bcb225f3275d2af8a4b3d7e9676d2d5828c52d",
	            "SandboxKey": "/var/run/docker/netns/e775d949b824",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51568"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51569"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51570"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51566"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51567"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-539000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c12a9d9fcb43",
	                        "skaffold-539000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "ff0ade7eabef5e290cf8b0532118017813483e34e48f1fa7c9c179d02980ba8a",
	                    "EndpointID": "ff39cc3f07eac2d721c8d77f4241273283058e6d333d53cd34b36f6a36c742d2",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "skaffold-539000",
	                        "c12a9d9fcb43"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
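For post-mortem triage, the most useful part of the inspect dump above is the NetworkSettings.Ports map (SSH on 22/tcp is published to 127.0.0.1:51568, the API server on 8443/tcp to 127.0.0.1:51567). A single mapping can be pulled out directly with docker's Go-template format flag; a minimal sketch against the container above:

	# prints the host port bound to the container's 22/tcp (51568 in the state shown)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' skaffold-539000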
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-539000 -n skaffold-539000
helpers_test.go:244: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p skaffold-539000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p skaffold-539000 logs -n 25: (2.508561382s)
helpers_test.go:252: TestSkaffold logs: 
-- stdout --
	
	==> Audit <==
	|------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	|  Command   |              Args              |        Profile        |   User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	| start      | -p multinode-183000-m02        | multinode-183000-m02  | jenkins  | v1.32.0 | 16 Feb 24 09:15 PST |                     |
	|            | --driver=docker                |                       |          |         |                     |                     |
	| start      | -p multinode-183000-m03        | multinode-183000-m03  | jenkins  | v1.32.0 | 16 Feb 24 09:15 PST | 16 Feb 24 09:16 PST |
	|            | --driver=docker                |                       |          |         |                     |                     |
	| node       | add -p multinode-183000        | multinode-183000      | jenkins  | v1.32.0 | 16 Feb 24 09:16 PST |                     |
	| delete     | -p multinode-183000-m03        | multinode-183000-m03  | jenkins  | v1.32.0 | 16 Feb 24 09:16 PST | 16 Feb 24 09:16 PST |
	| delete     | -p multinode-183000            | multinode-183000      | jenkins  | v1.32.0 | 16 Feb 24 09:16 PST | 16 Feb 24 09:16 PST |
	| start      | -p test-preload-987000         | test-preload-987000   | jenkins  | v1.32.0 | 16 Feb 24 09:16 PST | 16 Feb 24 09:17 PST |
	|            | --memory=2200                  |                       |          |         |                     |                     |
	|            | --alsologtostderr              |                       |          |         |                     |                     |
	|            | --wait=true --preload=false    |                       |          |         |                     |                     |
	|            | --driver=docker                |                       |          |         |                     |                     |
	|            | --kubernetes-version=v1.24.4   |                       |          |         |                     |                     |
	| image      | test-preload-987000 image pull | test-preload-987000   | jenkins  | v1.32.0 | 16 Feb 24 09:17 PST | 16 Feb 24 09:17 PST |
	|            | gcr.io/k8s-minikube/busybox    |                       |          |         |                     |                     |
	| stop       | -p test-preload-987000         | test-preload-987000   | jenkins  | v1.32.0 | 16 Feb 24 09:17 PST | 16 Feb 24 09:18 PST |
	| start      | -p test-preload-987000         | test-preload-987000   | jenkins  | v1.32.0 | 16 Feb 24 09:18 PST | 16 Feb 24 09:19 PST |
	|            | --memory=2200                  |                       |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                       |          |         |                     |                     |
	|            | --wait=true --driver=docker    |                       |          |         |                     |                     |
	| image      | test-preload-987000 image list | test-preload-987000   | jenkins  | v1.32.0 | 16 Feb 24 09:19 PST | 16 Feb 24 09:19 PST |
	| delete     | -p test-preload-987000         | test-preload-987000   | jenkins  | v1.32.0 | 16 Feb 24 09:19 PST | 16 Feb 24 09:19 PST |
	| start      | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:19 PST | 16 Feb 24 09:19 PST |
	|            | --memory=2048 --driver=docker  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:19 PST |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:19 PST |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:19 PST |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:19 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:19 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:19 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:19 PST | 16 Feb 24 09:19 PST |
	|            | --cancel-scheduled             |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:20 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:20 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:20 PST | 16 Feb 24 09:20 PST |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| delete     | -p scheduled-stop-455000       | scheduled-stop-455000 | jenkins  | v1.32.0 | 16 Feb 24 09:20 PST | 16 Feb 24 09:20 PST |
	| start      | -p skaffold-539000             | skaffold-539000       | jenkins  | v1.32.0 | 16 Feb 24 09:20 PST | 16 Feb 24 09:21 PST |
	|            | --memory=2600 --driver=docker  |                       |          |         |                     |                     |
	| docker-env | --shell none -p                | skaffold-539000       | skaffold | v1.32.0 | 16 Feb 24 09:21 PST | 16 Feb 24 09:21 PST |
	|            | skaffold-539000                |                       |          |         |                     |                     |
	|            | --user=skaffold                |                       |          |         |                     |                     |
	|------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 09:20:56
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 09:20:56.516036   10175 out.go:291] Setting OutFile to fd 1 ...
	I0216 09:20:56.516223   10175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:20:56.516230   10175 out.go:304] Setting ErrFile to fd 2...
	I0216 09:20:56.516234   10175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:20:56.516411   10175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 09:20:56.517948   10175 out.go:298] Setting JSON to false
	I0216 09:20:56.540337   10175 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3027,"bootTime":1708101029,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 09:20:56.540443   10175 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 09:20:56.569433   10175 out.go:177] * [skaffold-539000] minikube v1.32.0 on Darwin 14.3.1
	I0216 09:20:56.611180   10175 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 09:20:56.611270   10175 notify.go:220] Checking for updates...
	I0216 09:20:56.668270   10175 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 09:20:56.690265   10175 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 09:20:56.712956   10175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 09:20:56.755075   10175 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	I0216 09:20:56.776098   10175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 09:20:56.797231   10175 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 09:20:56.852356   10175 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 09:20:56.852519   10175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:20:56.954587   10175 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-16 17:20:56.944564002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:20:56.998203   10175 out.go:177] * Using the docker driver based on user configuration
	I0216 09:20:57.019455   10175 start.go:299] selected driver: docker
	I0216 09:20:57.019468   10175 start.go:903] validating driver "docker" against <nil>
	I0216 09:20:57.019482   10175 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 09:20:57.023918   10175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:20:57.131656   10175 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-16 17:20:57.121802622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:20:57.131813   10175 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 09:20:57.132004   10175 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 09:20:57.153541   10175 out.go:177] * Using Docker Desktop driver with root privileges
	I0216 09:20:57.175535   10175 cni.go:84] Creating CNI manager for ""
	I0216 09:20:57.175562   10175 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 09:20:57.175580   10175 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0216 09:20:57.175600   10175 start_flags.go:323] config:
	{Name:skaffold-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:skaffold-539000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:20:57.197349   10175 out.go:177] * Starting control plane node skaffold-539000 in cluster skaffold-539000
	I0216 09:20:57.218313   10175 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 09:20:57.241336   10175 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 09:20:57.283561   10175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 09:20:57.283614   10175 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0216 09:20:57.283618   10175 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 09:20:57.283624   10175 cache.go:56] Caching tarball of preloaded images
	I0216 09:20:57.283775   10175 preload.go:174] Found /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 09:20:57.283788   10175 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0216 09:20:57.284962   10175 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/config.json ...
	I0216 09:20:57.285053   10175 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/config.json: {Name:mk7e57c9f560147b805924e6f12e40bb85ddac69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:20:57.334239   10175 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 09:20:57.334248   10175 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 09:20:57.334266   10175 cache.go:194] Successfully downloaded all kic artifacts
	I0216 09:20:57.334306   10175 start.go:365] acquiring machines lock for skaffold-539000: {Name:mkbcdc76985795b99836ef8b1f595b0175299ead Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 09:20:57.334452   10175 start.go:369] acquired machines lock for "skaffold-539000" in 137.164µs
	I0216 09:20:57.334516   10175 start.go:93] Provisioning new machine with config: &{Name:skaffold-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:skaffold-539000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 09:20:57.334821   10175 start.go:125] createHost starting for "" (driver="docker")
	I0216 09:20:57.378427   10175 out.go:204] * Creating docker container (CPUs=2, Memory=2600MB) ...
	I0216 09:20:57.378764   10175 start.go:159] libmachine.API.Create for "skaffold-539000" (driver="docker")
	I0216 09:20:57.378809   10175 client.go:168] LocalClient.Create starting
	I0216 09:20:57.378987   10175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem
	I0216 09:20:57.379074   10175 main.go:141] libmachine: Decoding PEM data...
	I0216 09:20:57.379102   10175 main.go:141] libmachine: Parsing certificate...
	I0216 09:20:57.379202   10175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem
	I0216 09:20:57.379269   10175 main.go:141] libmachine: Decoding PEM data...
	I0216 09:20:57.379281   10175 main.go:141] libmachine: Parsing certificate...
	I0216 09:20:57.380188   10175 cli_runner.go:164] Run: docker network inspect skaffold-539000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0216 09:20:57.430658   10175 cli_runner.go:211] docker network inspect skaffold-539000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0216 09:20:57.430763   10175 network_create.go:281] running [docker network inspect skaffold-539000] to gather additional debugging logs...
	I0216 09:20:57.430784   10175 cli_runner.go:164] Run: docker network inspect skaffold-539000
	W0216 09:20:57.479833   10175 cli_runner.go:211] docker network inspect skaffold-539000 returned with exit code 1
	I0216 09:20:57.479855   10175 network_create.go:284] error running [docker network inspect skaffold-539000]: docker network inspect skaffold-539000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network skaffold-539000 not found
	I0216 09:20:57.479865   10175 network_create.go:286] output of [docker network inspect skaffold-539000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network skaffold-539000 not found
	
	** /stderr **
	I0216 09:20:57.480005   10175 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 09:20:57.531932   10175 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0216 09:20:57.532304   10175 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022198c0}
	I0216 09:20:57.532318   10175 network_create.go:124] attempt to create docker network skaffold-539000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0216 09:20:57.532379   10175 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-539000 skaffold-539000
	W0216 09:20:57.581548   10175 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-539000 skaffold-539000 returned with exit code 1
	W0216 09:20:57.581578   10175 network_create.go:149] failed to create docker network skaffold-539000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-539000 skaffold-539000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0216 09:20:57.581591   10175 network_create.go:116] failed to create docker network skaffold-539000 192.168.58.0/24, will retry: subnet is taken
	I0216 09:20:57.583227   10175 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0216 09:20:57.583579   10175 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022748f0}
	I0216 09:20:57.583593   10175 network_create.go:124] attempt to create docker network skaffold-539000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0216 09:20:57.583656   10175 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-539000 skaffold-539000
	I0216 09:20:57.670073   10175 network_create.go:108] docker network skaffold-539000 192.168.67.0/24 created
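
The attempts above are minikube's subnet-retry loop: it walks candidate private /24 networks, and when "docker network create" fails with "Pool overlaps with other one on this address space" it marks that subnet as taken and moves to the next one (192.168.58.0/24 to 192.168.67.0/24 here). A minimal Go sketch of the same pattern, assuming a hand-picked candidate list; createNetwork and the gateway derivation are illustrative, not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // createNetwork tries each candidate /24 until `docker network create`
    // succeeds, skipping any subnet the daemon reports as overlapping.
    func createNetwork(name string, subnets []string) (string, error) {
    	for _, cidr := range subnets {
    		// e.g. "192.168.58.0/24" -> gateway "192.168.58.1" (handles .0/24 candidates only)
    		gw := strings.TrimSuffix(cidr, "0/24") + "1"
    		out, err := exec.Command("docker", "network", "create",
    			"--driver=bridge", "--subnet="+cidr, "--gateway="+gw, name).CombinedOutput()
    		if err == nil {
    			return cidr, nil
    		}
    		if strings.Contains(string(out), "Pool overlaps") {
    			continue // subnet taken by another network; try the next candidate
    		}
    		return "", fmt.Errorf("network create failed: %v: %s", err, out)
    	}
    	return "", fmt.Errorf("no free subnet left for %s", name)
    }

    func main() {
    	cidr, err := createNetwork("demo-net", []string{"192.168.58.0/24", "192.168.67.0/24"})
    	fmt.Println(cidr, err)
    }
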
	I0216 09:20:57.670104   10175 kic.go:121] calculated static IP "192.168.67.2" for the "skaffold-539000" container
	I0216 09:20:57.670211   10175 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0216 09:20:57.719382   10175 cli_runner.go:164] Run: docker volume create skaffold-539000 --label name.minikube.sigs.k8s.io=skaffold-539000 --label created_by.minikube.sigs.k8s.io=true
	I0216 09:20:57.770029   10175 oci.go:103] Successfully created a docker volume skaffold-539000
	I0216 09:20:57.770141   10175 cli_runner.go:164] Run: docker run --rm --name skaffold-539000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-539000 --entrypoint /usr/bin/test -v skaffold-539000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0216 09:20:58.262109   10175 oci.go:107] Successfully prepared a docker volume skaffold-539000
	I0216 09:20:58.262143   10175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 09:20:58.262153   10175 kic.go:194] Starting extracting preloaded images to volume ...
	I0216 09:20:58.262237   10175 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-539000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0216 09:21:00.623780   10175 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-539000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (2.361527671s)
	I0216 09:21:00.623803   10175 kic.go:203] duration metric: took 2.361672 seconds to extract preloaded images to volume
	I0216 09:21:00.623915   10175 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0216 09:21:00.728168   10175 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-539000 --name skaffold-539000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-539000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-539000 --network skaffold-539000 --ip 192.168.67.2 --volume skaffold-539000:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0216 09:21:00.987995   10175 cli_runner.go:164] Run: docker container inspect skaffold-539000 --format={{.State.Running}}
	I0216 09:21:01.043949   10175 cli_runner.go:164] Run: docker container inspect skaffold-539000 --format={{.State.Status}}
	I0216 09:21:01.102870   10175 cli_runner.go:164] Run: docker exec skaffold-539000 stat /var/lib/dpkg/alternatives/iptables
	I0216 09:21:01.275346   10175 oci.go:144] the created container "skaffold-539000" has a running status.
	I0216 09:21:01.275381   10175 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/skaffold-539000/id_rsa...
	I0216 09:21:01.499453   10175 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/skaffold-539000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0216 09:21:01.567899   10175 cli_runner.go:164] Run: docker container inspect skaffold-539000 --format={{.State.Status}}
	I0216 09:21:01.622725   10175 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0216 09:21:01.622739   10175 kic_runner.go:114] Args: [docker exec --privileged skaffold-539000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0216 09:21:01.723579   10175 cli_runner.go:164] Run: docker container inspect skaffold-539000 --format={{.State.Status}}
	I0216 09:21:01.775834   10175 machine.go:88] provisioning docker machine ...
	I0216 09:21:01.775874   10175 ubuntu.go:169] provisioning hostname "skaffold-539000"
	I0216 09:21:01.775975   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:01.828520   10175 main.go:141] libmachine: Using SSH client type: native
	I0216 09:21:01.828858   10175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 51568 <nil> <nil>}
	I0216 09:21:01.828870   10175 main.go:141] libmachine: About to run SSH command:
	sudo hostname skaffold-539000 && echo "skaffold-539000" | sudo tee /etc/hostname
	I0216 09:21:01.989316   10175 main.go:141] libmachine: SSH cmd err, output: <nil>: skaffold-539000
	
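
Every "docker container inspect -f ... HostPort" call in this log answers the same question: the node's container port 22 was published to an ephemeral localhost port (51568 in this run), and the Go template digs it out of the port map so the SSH provisioner knows where to dial. A small sketch of that lookup; hostPort is an illustrative helper name:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPort asks Docker which localhost port a published container port
    // was mapped to, using the same template as the log above.
    func hostPort(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	p, err := hostPort("skaffold-539000", "22/tcp")
    	fmt.Println(p, err)
    }
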
	I0216 09:21:01.989417   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:02.041949   10175 main.go:141] libmachine: Using SSH client type: native
	I0216 09:21:02.042229   10175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 51568 <nil> <nil>}
	I0216 09:21:02.042240   10175 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sskaffold-539000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 skaffold-539000/g' /etc/hosts;
				else 
					echo '127.0.1.1 skaffold-539000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 09:21:02.175723   10175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 09:21:02.175739   10175 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17936-1021/.minikube CaCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17936-1021/.minikube}
	I0216 09:21:02.175764   10175 ubuntu.go:177] setting up certificates
	I0216 09:21:02.175773   10175 provision.go:83] configureAuth start
	I0216 09:21:02.175843   10175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-539000
	I0216 09:21:02.227301   10175 provision.go:138] copyHostCerts
	I0216 09:21:02.227469   10175 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem, removing ...
	I0216 09:21:02.227478   10175 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
	I0216 09:21:02.227624   10175 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem (1082 bytes)
	I0216 09:21:02.227835   10175 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem, removing ...
	I0216 09:21:02.227838   10175 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
	I0216 09:21:02.227916   10175 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem (1123 bytes)
	I0216 09:21:02.228089   10175 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem, removing ...
	I0216 09:21:02.228099   10175 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
	I0216 09:21:02.228169   10175 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem (1675 bytes)
	I0216 09:21:02.228312   10175 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem org=jenkins.skaffold-539000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube skaffold-539000]
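
The server certificate generated here carries a SAN list spanning the container IP, loopback, and the minikube/profile DNS names, so the TLS-guarded Docker endpoint verifies no matter which address the client dials. A self-signed stand-in for that step: the real server.pem is signed by ca.pem, and apart from the SAN values, org, and the 26280h lifetime taken from this run, everything below is an assumption:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.skaffold-539000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from this profile
    		// SANs copied from the san=[...] list in the log line above.
    		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "skaffold-539000"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }
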
	I0216 09:21:02.536638   10175 provision.go:172] copyRemoteCerts
	I0216 09:21:02.536742   10175 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 09:21:02.536794   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:02.588253   10175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51568 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/skaffold-539000/id_rsa Username:docker}
	I0216 09:21:02.690855   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 09:21:02.730702   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0216 09:21:02.770831   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 09:21:02.811137   10175 provision.go:86] duration metric: configureAuth took 635.357446ms
	I0216 09:21:02.811148   10175 ubuntu.go:193] setting minikube options for container-runtime
	I0216 09:21:02.811293   10175 config.go:182] Loaded profile config "skaffold-539000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 09:21:02.811370   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:02.863587   10175 main.go:141] libmachine: Using SSH client type: native
	I0216 09:21:02.863879   10175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 51568 <nil> <nil>}
	I0216 09:21:02.863891   10175 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 09:21:02.999853   10175 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 09:21:02.999867   10175 ubuntu.go:71] root file system type: overlay
	I0216 09:21:02.999963   10175 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 09:21:03.000051   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:03.051447   10175 main.go:141] libmachine: Using SSH client type: native
	I0216 09:21:03.051742   10175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 51568 <nil> <nil>}
	I0216 09:21:03.051788   10175 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 09:21:03.213423   10175 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 09:21:03.213538   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:03.265985   10175 main.go:141] libmachine: Using SSH client type: native
	I0216 09:21:03.266286   10175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 51568 <nil> <nil>}
	I0216 09:21:03.266298   10175 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 09:21:03.895287   10175 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-16 17:21:03.207610675 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
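
The unified diff above is a side effect of how the unit gets installed: "diff -u old new || { mv ...; daemon-reload; restart; }" only replaces the file and restarts Docker when the rendered unit differs from the installed one, so a second provisioning pass with identical content is a no-op. The same idempotent-write idea in Go; the path and contents in main are placeholders:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // updateIfChanged writes rendered to path only when the content differs,
    // reporting whether the caller still needs to reload/restart the service.
    func updateIfChanged(path string, rendered []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, rendered) {
    		return false, nil // already up to date; skip daemon-reload/restart
    	}
    	return true, os.WriteFile(path, rendered, 0o644)
    }

    func main() {
    	changed, err := updateIfChanged("/tmp/docker.service.demo", []byte("[Unit]\nDescription=demo\n"))
    	fmt.Println(changed, err)
    }
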
	I0216 09:21:03.895305   10175 machine.go:91] provisioned docker machine in 2.119474132s
	I0216 09:21:03.895309   10175 client.go:171] LocalClient.Create took 6.516561804s
	I0216 09:21:03.895324   10175 start.go:167] duration metric: libmachine.API.Create for "skaffold-539000" took 6.51662926s
	I0216 09:21:03.895332   10175 start.go:300] post-start starting for "skaffold-539000" (driver="docker")
	I0216 09:21:03.895338   10175 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 09:21:03.895444   10175 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 09:21:03.895506   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:03.948351   10175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51568 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/skaffold-539000/id_rsa Username:docker}
	I0216 09:21:04.053454   10175 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 09:21:04.057505   10175 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 09:21:04.057525   10175 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 09:21:04.057535   10175 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 09:21:04.057539   10175 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 09:21:04.057547   10175 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/addons for local assets ...
	I0216 09:21:04.057652   10175 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/files for local assets ...
	I0216 09:21:04.057834   10175 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> 21512.pem in /etc/ssl/certs
	I0216 09:21:04.058036   10175 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 09:21:04.072641   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:21:04.112671   10175 start.go:303] post-start completed in 217.323595ms
	I0216 09:21:04.113260   10175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-539000
	I0216 09:21:04.165088   10175 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/config.json ...
	I0216 09:21:04.165581   10175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 09:21:04.165651   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:04.217188   10175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51568 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/skaffold-539000/id_rsa Username:docker}
	I0216 09:21:04.310233   10175 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 09:21:04.315000   10175 start.go:128] duration metric: createHost completed in 6.980229132s
	I0216 09:21:04.315010   10175 start.go:83] releasing machines lock for "skaffold-539000", held for 6.980622158s
	I0216 09:21:04.315105   10175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-539000
	I0216 09:21:04.365951   10175 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 09:21:04.365953   10175 ssh_runner.go:195] Run: cat /version.json
	I0216 09:21:04.366022   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:04.366030   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:04.423018   10175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51568 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/skaffold-539000/id_rsa Username:docker}
	I0216 09:21:04.423031   10175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51568 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/skaffold-539000/id_rsa Username:docker}
	I0216 09:21:04.514212   10175 ssh_runner.go:195] Run: systemctl --version
	I0216 09:21:04.622292   10175 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 09:21:04.627656   10175 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 09:21:04.669190   10175 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 09:21:04.669272   10175 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0216 09:21:04.713420   10175 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0216 09:21:04.713429   10175 start.go:475] detecting cgroup driver to use...
	I0216 09:21:04.713444   10175 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:21:04.713558   10175 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:21:04.741197   10175 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0216 09:21:04.756938   10175 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 09:21:04.772800   10175 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 09:21:04.772860   10175 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 09:21:04.788761   10175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:21:04.804442   10175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 09:21:04.820193   10175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:21:04.836140   10175 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 09:21:04.853316   10175 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 09:21:04.869537   10175 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 09:21:04.884739   10175 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 09:21:04.899787   10175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:21:04.962922   10175 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 09:21:05.050787   10175 start.go:475] detecting cgroup driver to use...
	I0216 09:21:05.050801   10175 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:21:05.050877   10175 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 09:21:05.078704   10175 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 09:21:05.078766   10175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 09:21:05.097579   10175 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:21:05.128580   10175 ssh_runner.go:195] Run: which cri-dockerd
	I0216 09:21:05.133771   10175 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 09:21:05.150122   10175 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 09:21:05.180073   10175 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 09:21:05.271484   10175 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 09:21:05.334539   10175 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 09:21:05.334612   10175 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
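
The 130-byte daemon.json pushed here is what keeps the Docker daemon's cgroup driver in agreement with the kubelet's (cgroupfs in this run); a mismatch between the two is a classic source of kubelet startup failures. The log does not show the file's contents, so the sketch below is only a plausible shape, with every field value an assumption:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // A guess at a daemon.json that pins the cgroup driver; only the
    // native.cgroupdriver exec-opt is essential to the step above.
    func main() {
    	cfg := map[string]any{
    		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
    		"log-driver":     "json-file",
    		"storage-driver": "overlay2",
    	}
    	b, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(b))
    }
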
	I0216 09:21:05.377912   10175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:21:05.439626   10175 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 09:21:05.702939   10175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0216 09:21:05.722909   10175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 09:21:05.741219   10175 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0216 09:21:05.806225   10175 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0216 09:21:05.871756   10175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:21:05.935584   10175 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0216 09:21:05.966185   10175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 09:21:05.983613   10175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:21:06.047073   10175 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0216 09:21:06.137736   10175 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0216 09:21:06.137852   10175 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0216 09:21:06.142651   10175 start.go:543] Will wait 60s for crictl version
	I0216 09:21:06.142704   10175 ssh_runner.go:195] Run: which crictl
	I0216 09:21:06.146720   10175 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0216 09:21:06.199730   10175 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0216 09:21:06.199797   10175 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 09:21:06.222325   10175 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 09:21:06.291686   10175 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0216 09:21:06.291769   10175 cli_runner.go:164] Run: docker exec -t skaffold-539000 dig +short host.docker.internal
	I0216 09:21:06.392849   10175 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0216 09:21:06.392950   10175 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0216 09:21:06.397666   10175 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
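
The bash one-liner above rewrites /etc/hosts by filtering out any existing host.minikube.internal line and appending a fresh mapping, so repeated starts never stack duplicate entries. The same replace-or-append logic in Go; pinHost and the demo path are illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost drops any line ending in "\t<name>" and appends "ip\tname",
    // mirroring the grep -v / echo pipeline from the log.
    func pinHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	os.WriteFile("/tmp/hosts-demo", []byte("127.0.0.1\tlocalhost\n"), 0o644)
    	fmt.Println(pinHost("/tmp/hosts-demo", "192.168.65.254", "host.minikube.internal"))
    }
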
	I0216 09:21:06.414594   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:06.467207   10175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 09:21:06.467271   10175 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:21:06.485792   10175 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0216 09:21:06.485804   10175 docker.go:615] Images already preloaded, skipping extraction
	I0216 09:21:06.485930   10175 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:21:06.504654   10175 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0216 09:21:06.504669   10175 cache_images.go:84] Images are preloaded, skipping loading
	I0216 09:21:06.504763   10175 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 09:21:06.551043   10175 cni.go:84] Creating CNI manager for ""
	I0216 09:21:06.551055   10175 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 09:21:06.551086   10175 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 09:21:06.551099   10175 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:skaffold-539000 NodeName:skaffold-539000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0216 09:21:06.551212   10175 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "skaffold-539000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
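
The YAML above is rendered from the kubeadm options struct logged just before it (advertise address, pod/service CIDRs, admission plugins, and so on). A trimmed-down illustration of that kind of templating with Go's text/template; the template string is a reduced stand-in, not minikube's actual bootstrapper template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Reduced ClusterConfiguration template; fields mirror the rendered YAML above.
    const clusterCfg = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
    	"kind: ClusterConfiguration\n" +
    	"controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}\n" +
    	"kubernetesVersion: {{.K8sVersion}}\n" +
    	"networking:\n" +
    	"  podSubnet: \"{{.PodCIDR}}\"\n" +
    	"  serviceSubnet: {{.ServiceCIDR}}\n"

    func main() {
    	t := template.Must(template.New("cfg").Parse(clusterCfg))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Endpoint":    "control-plane.minikube.internal",
    		"Port":        "8443",
    		"K8sVersion":  "v1.28.4",
    		"PodCIDR":     "10.244.0.0/16",
    		"ServiceCIDR": "10.96.0.0/12",
    	})
    }
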
	I0216 09:21:06.551263   10175 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=skaffold-539000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:skaffold-539000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 09:21:06.551324   10175 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0216 09:21:06.566880   10175 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 09:21:06.566945   10175 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 09:21:06.581721   10175 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0216 09:21:06.610313   10175 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 09:21:06.640714   10175 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0216 09:21:06.669787   10175 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0216 09:21:06.673973   10175 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 09:21:06.691013   10175 certs.go:56] Setting up /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000 for IP: 192.168.67.2
	I0216 09:21:06.691031   10175 certs.go:190] acquiring lock for shared ca certs: {Name:mk8795f926ccc5dd497b243df5a2c158b5c5b28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:21:06.691241   10175 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key
	I0216 09:21:06.691363   10175 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key
	I0216 09:21:06.691425   10175 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/client.key
	I0216 09:21:06.691435   10175 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/client.crt with IP's: []
	I0216 09:21:06.781191   10175 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/client.crt ...
	I0216 09:21:06.781197   10175 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/client.crt: {Name:mk2ab203691e2e374ece6badd89158e34bf648a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:21:06.781519   10175 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/client.key ...
	I0216 09:21:06.781524   10175 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/client.key: {Name:mk61101f878c31528f15289445eb59e0059e46a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:21:06.781761   10175 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.key.c7fa3a9e
	I0216 09:21:06.781775   10175 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0216 09:21:06.961272   10175 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.crt.c7fa3a9e ...
	I0216 09:21:06.961280   10175 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.crt.c7fa3a9e: {Name:mk52ebbf6490914db0dbc07ac72651ca2b3a27fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:21:06.961559   10175 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.key.c7fa3a9e ...
	I0216 09:21:06.961565   10175 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.key.c7fa3a9e: {Name:mk86701456de74a5c1b261e51292a0e18e4189af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:21:06.961783   10175 certs.go:337] copying /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.crt
	I0216 09:21:06.961964   10175 certs.go:341] copying /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.key
	I0216 09:21:06.962128   10175 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/proxy-client.key
	I0216 09:21:06.962142   10175 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/proxy-client.crt with IP's: []
	I0216 09:21:07.053227   10175 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/proxy-client.crt ...
	I0216 09:21:07.053232   10175 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/proxy-client.crt: {Name:mk629029f48b01b30fc746fe0cdd6753b68bdc9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:21:07.053482   10175 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/proxy-client.key ...
	I0216 09:21:07.053488   10175 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/proxy-client.key: {Name:mk815c874e19eddf28469b3e4ddfd5c19c8b78a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:21:07.053875   10175 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem (1338 bytes)
	W0216 09:21:07.053927   10175 certs.go:433] ignoring /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151_empty.pem, impossibly tiny 0 bytes
	I0216 09:21:07.053936   10175 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 09:21:07.053967   10175 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem (1082 bytes)
	I0216 09:21:07.053992   10175 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem (1123 bytes)
	I0216 09:21:07.054021   10175 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem (1675 bytes)
	I0216 09:21:07.054080   10175 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:21:07.054663   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 09:21:07.096940   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 09:21:07.136950   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 09:21:07.176742   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/skaffold-539000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 09:21:07.217553   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 09:21:07.260003   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 09:21:07.300724   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 09:21:07.341686   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 09:21:07.382846   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 09:21:07.422771   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem --> /usr/share/ca-certificates/2151.pem (1338 bytes)
	I0216 09:21:07.462921   10175 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /usr/share/ca-certificates/21512.pem (1708 bytes)
	I0216 09:21:07.503632   10175 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 09:21:07.533125   10175 ssh_runner.go:195] Run: openssl version
	I0216 09:21:07.539142   10175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21512.pem && ln -fs /usr/share/ca-certificates/21512.pem /etc/ssl/certs/21512.pem"
	I0216 09:21:07.554802   10175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21512.pem
	I0216 09:21:07.559172   10175 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:51 /usr/share/ca-certificates/21512.pem
	I0216 09:21:07.559248   10175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21512.pem
	I0216 09:21:07.566106   10175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21512.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 09:21:07.581978   10175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 09:21:07.597584   10175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:21:07.602131   10175 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:21:07.602191   10175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:21:07.609696   10175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 09:21:07.625936   10175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2151.pem && ln -fs /usr/share/ca-certificates/2151.pem /etc/ssl/certs/2151.pem"
	I0216 09:21:07.643870   10175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2151.pem
	I0216 09:21:07.648515   10175 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:51 /usr/share/ca-certificates/2151.pem
	I0216 09:21:07.648557   10175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2151.pem
	I0216 09:21:07.655088   10175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2151.pem /etc/ssl/certs/51391683.0"
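
The three openssl x509 -hash calls above compute the subject-hash that names each /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA.pem, for example); OpenSSL-based clients locate trusted CAs by exactly that hash. The same step as a Go sketch; linkCert is an illustrative name, and running it against real system paths requires root:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert creates the <subject-hash>.0 symlink that makes a CA
    // discoverable in an OpenSSL-style certs directory.
    func linkCert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // replace any stale link, as the `ln -fs` above does
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
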
	I0216 09:21:07.671817   10175 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 09:21:07.676022   10175 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0216 09:21:07.676096   10175 kubeadm.go:404] StartCluster: {Name:skaffold-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:skaffold-539000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:21:07.676188   10175 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 09:21:07.692890   10175 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 09:21:07.707864   10175 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 09:21:07.722864   10175 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 09:21:07.722978   10175 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:21:07.738182   10175 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 09:21:07.738220   10175 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 09:21:07.785917   10175 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0216 09:21:07.786028   10175 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 09:21:07.908455   10175 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 09:21:07.908567   10175 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 09:21:07.908638   10175 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 09:21:08.201500   10175 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 09:21:08.244034   10175 out.go:204]   - Generating certificates and keys ...
	I0216 09:21:08.244087   10175 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 09:21:08.244183   10175 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 09:21:08.344162   10175 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0216 09:21:08.518684   10175 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0216 09:21:08.630983   10175 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0216 09:21:08.746667   10175 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0216 09:21:08.777963   10175 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0216 09:21:08.778112   10175 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost skaffold-539000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0216 09:21:08.887161   10175 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0216 09:21:08.887321   10175 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost skaffold-539000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0216 09:21:09.018932   10175 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0216 09:21:09.216344   10175 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0216 09:21:09.448837   10175 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0216 09:21:09.448926   10175 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 09:21:09.573233   10175 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 09:21:09.650526   10175 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 09:21:09.736876   10175 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 09:21:09.945078   10175 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 09:21:09.945467   10175 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 09:21:09.947336   10175 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 09:21:09.968948   10175 out.go:204]   - Booting up control plane ...
	I0216 09:21:09.969025   10175 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 09:21:09.969096   10175 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 09:21:09.969147   10175 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 09:21:09.969218   10175 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 09:21:09.969284   10175 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 09:21:09.969322   10175 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 09:21:10.040729   10175 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 09:21:15.044915   10175 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.004016 seconds
	I0216 09:21:15.045075   10175 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0216 09:21:15.055985   10175 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0216 09:21:15.571136   10175 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0216 09:21:15.571304   10175 kubeadm.go:322] [mark-control-plane] Marking the node skaffold-539000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0216 09:21:16.079985   10175 kubeadm.go:322] [bootstrap-token] Using token: ukxy7m.7nqjet3zzb12t53r
	I0216 09:21:16.102725   10175 out.go:204]   - Configuring RBAC rules ...
	I0216 09:21:16.102821   10175 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0216 09:21:16.143841   10175 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0216 09:21:16.148460   10175 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0216 09:21:16.150550   10175 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0216 09:21:16.152925   10175 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0216 09:21:16.155068   10175 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0216 09:21:16.164409   10175 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0216 09:21:16.281509   10175 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0216 09:21:16.570531   10175 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0216 09:21:16.571355   10175 kubeadm.go:322] 
	I0216 09:21:16.571442   10175 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0216 09:21:16.571450   10175 kubeadm.go:322] 
	I0216 09:21:16.571551   10175 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0216 09:21:16.571556   10175 kubeadm.go:322] 
	I0216 09:21:16.571586   10175 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0216 09:21:16.571630   10175 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0216 09:21:16.571666   10175 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0216 09:21:16.571668   10175 kubeadm.go:322] 
	I0216 09:21:16.571704   10175 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0216 09:21:16.571707   10175 kubeadm.go:322] 
	I0216 09:21:16.571751   10175 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0216 09:21:16.571754   10175 kubeadm.go:322] 
	I0216 09:21:16.571825   10175 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0216 09:21:16.571910   10175 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0216 09:21:16.571988   10175 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0216 09:21:16.572022   10175 kubeadm.go:322] 
	I0216 09:21:16.572148   10175 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0216 09:21:16.572252   10175 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0216 09:21:16.572260   10175 kubeadm.go:322] 
	I0216 09:21:16.572373   10175 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ukxy7m.7nqjet3zzb12t53r \
	I0216 09:21:16.572543   10175 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f04862da0f135f2f63db76a0e7e00284dbb48f603bb98f1797713392a7cbadc1 \
	I0216 09:21:16.572571   10175 kubeadm.go:322] 	--control-plane 
	I0216 09:21:16.572595   10175 kubeadm.go:322] 
	I0216 09:21:16.572714   10175 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0216 09:21:16.572729   10175 kubeadm.go:322] 
	I0216 09:21:16.572888   10175 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ukxy7m.7nqjet3zzb12t53r \
	I0216 09:21:16.572982   10175 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f04862da0f135f2f63db76a0e7e00284dbb48f603bb98f1797713392a7cbadc1 
	I0216 09:21:16.580422   10175 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0216 09:21:16.580548   10175 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 09:21:16.580560   10175 cni.go:84] Creating CNI manager for ""
	I0216 09:21:16.580574   10175 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 09:21:16.600532   10175 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 09:21:16.676428   10175 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 09:21:16.697045   10175 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0216 09:21:16.727100   10175 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0216 09:21:16.727195   10175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 09:21:16.727198   10175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fdce3bf7146356e37c4eabb07ae105993e4520f9 minikube.k8s.io/name=skaffold-539000 minikube.k8s.io/updated_at=2024_02_16T09_21_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 09:21:16.888522   10175 ops.go:34] apiserver oom_adj: -16
	I0216 09:21:16.888557   10175 kubeadm.go:1088] duration metric: took 161.426559ms to wait for elevateKubeSystemPrivileges.
	I0216 09:21:16.888567   10175 kubeadm.go:406] StartCluster complete in 9.212565928s
	I0216 09:21:16.888581   10175 settings.go:142] acquiring lock: {Name:mk797212e07e7fce370dcd397d90efd277229019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:21:16.888666   10175 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 09:21:16.889205   10175 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/kubeconfig: {Name:mkc64745a91dd32fe2631c66fb95eca6401b716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:21:16.889485   10175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0216 09:21:16.889518   10175 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0216 09:21:16.889566   10175 addons.go:69] Setting storage-provisioner=true in profile "skaffold-539000"
	I0216 09:21:16.889581   10175 addons.go:234] Setting addon storage-provisioner=true in "skaffold-539000"
	I0216 09:21:16.889583   10175 addons.go:69] Setting default-storageclass=true in profile "skaffold-539000"
	I0216 09:21:16.889607   10175 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "skaffold-539000"
	I0216 09:21:16.889618   10175 config.go:182] Loaded profile config "skaffold-539000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 09:21:16.889629   10175 host.go:66] Checking if "skaffold-539000" exists ...
	I0216 09:21:16.889935   10175 cli_runner.go:164] Run: docker container inspect skaffold-539000 --format={{.State.Status}}
	I0216 09:21:16.889992   10175 cli_runner.go:164] Run: docker container inspect skaffold-539000 --format={{.State.Status}}
	I0216 09:21:16.976175   10175 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 09:21:16.954025   10175 addons.go:234] Setting addon default-storageclass=true in "skaffold-539000"
	I0216 09:21:16.976207   10175 host.go:66] Checking if "skaffold-539000" exists ...
	I0216 09:21:16.980161   10175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0216 09:21:16.996142   10175 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 09:21:16.996149   10175 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0216 09:21:16.996221   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:16.997071   10175 cli_runner.go:164] Run: docker container inspect skaffold-539000 --format={{.State.Status}}
	I0216 09:21:17.063818   10175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51568 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/skaffold-539000/id_rsa Username:docker}
	I0216 09:21:17.065551   10175 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0216 09:21:17.065560   10175 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0216 09:21:17.065655   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:17.130843   10175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51568 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/skaffold-539000/id_rsa Username:docker}
	I0216 09:21:17.241118   10175 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 09:21:17.294688   10175 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0216 09:21:17.398034   10175 kapi.go:248] "coredns" deployment in "kube-system" namespace and "skaffold-539000" context rescaled to 1 replicas
	I0216 09:21:17.398054   10175 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 09:21:17.419854   10175 out.go:177] * Verifying Kubernetes components...
	I0216 09:21:17.441603   10175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 09:21:17.898346   10175 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0216 09:21:18.062502   10175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-539000
	I0216 09:21:18.093115   10175 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0216 09:21:18.151970   10175 addons.go:505] enable addons completed in 1.262467527s: enabled=[storage-provisioner default-storageclass]
	I0216 09:21:18.158321   10175 api_server.go:52] waiting for apiserver process to appear ...
	I0216 09:21:18.158369   10175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:21:18.175446   10175 api_server.go:72] duration metric: took 777.377066ms to wait for apiserver process to appear ...
	I0216 09:21:18.175454   10175 api_server.go:88] waiting for apiserver healthz status ...
	I0216 09:21:18.175479   10175 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51567/healthz ...
	I0216 09:21:18.181071   10175 api_server.go:279] https://127.0.0.1:51567/healthz returned 200:
	ok
	I0216 09:21:18.182575   10175 api_server.go:141] control plane version: v1.28.4
	I0216 09:21:18.182587   10175 api_server.go:131] duration metric: took 7.130448ms to wait for apiserver health ...
	I0216 09:21:18.182595   10175 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 09:21:18.188226   10175 system_pods.go:59] 5 kube-system pods found
	I0216 09:21:18.188236   10175 system_pods.go:61] "etcd-skaffold-539000" [61811ad4-3623-4f12-8e19-0d1b048a1707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 09:21:18.188245   10175 system_pods.go:61] "kube-apiserver-skaffold-539000" [59ea69d5-6a8c-4753-91cf-1e5b385eca79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 09:21:18.188251   10175 system_pods.go:61] "kube-controller-manager-skaffold-539000" [470f821c-c7f3-4829-8861-111a1eb34867] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 09:21:18.188257   10175 system_pods.go:61] "kube-scheduler-skaffold-539000" [a3dd0769-06be-4d05-b568-2297e340fae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 09:21:18.188262   10175 system_pods.go:61] "storage-provisioner" [af6ac183-ad72-4832-84c8-f4a39f054f34] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0216 09:21:18.188267   10175 system_pods.go:74] duration metric: took 5.669011ms to wait for pod list to return data ...
	I0216 09:21:18.188271   10175 kubeadm.go:581] duration metric: took 790.206801ms to wait for : map[apiserver:true system_pods:true] ...
	I0216 09:21:18.188279   10175 node_conditions.go:102] verifying NodePressure condition ...
	I0216 09:21:18.191196   10175 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0216 09:21:18.191207   10175 node_conditions.go:123] node cpu capacity is 12
	I0216 09:21:18.191219   10175 node_conditions.go:105] duration metric: took 2.938327ms to run NodePressure ...
	I0216 09:21:18.191225   10175 start.go:228] waiting for startup goroutines ...
	I0216 09:21:18.191229   10175 start.go:233] waiting for cluster config update ...
	I0216 09:21:18.191238   10175 start.go:242] writing updated cluster config ...
	I0216 09:21:18.229176   10175 ssh_runner.go:195] Run: rm -f paused
	I0216 09:21:18.273327   10175 start.go:601] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0216 09:21:18.294827   10175 out.go:177] * Done! kubectl is now configured to use "skaffold-539000" cluster and "default" namespace by default
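The CNI selection above (docker driver plus docker runtime on Kubernetes v1.24+ resolves to bridge) ends with a 457-byte conflist written to /etc/cni/net.d/1-k8s.conflist. The log does not capture the file itself; below is a sketch for inspecting it from the host, plus the general shape such a bridge conflist takes (field values are illustrative, not the exact bytes minikube wrote):

    minikube ssh -p skaffold-539000 -- sudo cat /etc/cni/net.d/1-k8s.conflist

    # Typical bridge conflist shape (illustrative):
    # {
    #   "cniVersion": "0.3.1",
    #   "name": "bridge",
    #   "plugins": [
    #     {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
    #      "ipMasq": true, "hairpinMode": true,
    #      "ipam": {"type": "host-local"}},
    #     {"type": "portmap", "capabilities": {"portMappings": true}}
    #   ]
    # }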
	
	
	==> Docker <==
	Feb 16 17:21:05 skaffold-539000 dockerd[1086]: time="2024-02-16T17:21:05.676620275Z" level=info msg="Daemon has completed initialization"
	Feb 16 17:21:05 skaffold-539000 dockerd[1086]: time="2024-02-16T17:21:05.698119859Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 16 17:21:05 skaffold-539000 dockerd[1086]: time="2024-02-16T17:21:05.698334219Z" level=info msg="API listen on [::]:2376"
	Feb 16 17:21:05 skaffold-539000 systemd[1]: Started Docker Application Container Engine.
	Feb 16 17:21:06 skaffold-539000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Feb 16 17:21:06 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:06Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Feb 16 17:21:06 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:06Z" level=info msg="Start docker client with request timeout 0s"
	Feb 16 17:21:06 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:06Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Feb 16 17:21:06 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:06Z" level=info msg="Loaded network plugin cni"
	Feb 16 17:21:06 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:06Z" level=info msg="Docker cri networking managed by network plugin cni"
	Feb 16 17:21:06 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:06Z" level=info msg="Docker Info: &{ID:85152f80-e8a6-4971-a8f6-ae6ffe579a1b Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2024-02-16T17:21:06.126605566Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:6.6.12-linuxkit OperatingSystem:Ubuntu 22.04.3 LTS OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00042a7e0 NCPU:12 MemTotal:6213300224 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:skaffold-539000 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: DefaultAddressPools:[] Warnings:[]}"
	Feb 16 17:21:06 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:06Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 16 17:21:06 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:06Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 16 17:21:06 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:06Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 16 17:21:06 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:06Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 16 17:21:06 skaffold-539000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 16 17:21:11 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6cab60890b2540617eb306ec2de02c06b09713370d02a3c0b8b28fd9d4b776bc/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:21:11 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e0a9256c0c3912c7f2298d7d53c89a97f65c40b57fd6571af551ae825f66e366/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:21:11 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/58c9d6422c79ecbf0f028b91b75ff2baec1ba10b090eecd560865b10885e4eb5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:21:11 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd856d5710e7ad5141835a19a1fb8b015b20936eb06b22babe101acdd83e4df8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:21:29 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2517b2bba0cf610dada6a8e19042d25bffbc7001b238d98394550739f4cea1af/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:21:29 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ff126ac17ea4eae5bf5029f7a15abb5e8c4f78ef5e83f8035e27626da909a46/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:21:30 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3878c89dd39a5b88db33b68374624dd27be2650620246d2ee8c382c649b577d0/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:21:37 skaffold-539000 cri-dockerd[1313]: time="2024-02-16T17:21:37Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Feb 16 17:21:59 skaffold-539000 dockerd[1086]: time="2024-02-16T17:21:59.283164621Z" level=info msg="ignoring event" container=011dbeabaf6794109ef2f86b7aecd66829af3c273c2cd6260442f7eaca05ada8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e10884c08029       6e38f40d628db       4 minutes ago       Running             storage-provisioner       1                   2517b2bba0cf6       storage-provisioner
	1437c0ded2c0f       ead0a4a53df89       4 minutes ago       Running             coredns                   0                   3878c89dd39a5       coredns-5dd5756b68-ps9rn
	f60387f14d477       83f6cc407eed8       4 minutes ago       Running             kube-proxy                0                   4ff126ac17ea4       kube-proxy-m9ns4
	011dbeabaf679       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   2517b2bba0cf6       storage-provisioner
	453f6fca776e3       d058aa5ab969c       4 minutes ago       Running             kube-controller-manager   0                   e0a9256c0c391       kube-controller-manager-skaffold-539000
	d0bf43687a802       73deb9a3f7025       4 minutes ago       Running             etcd                      0                   fd856d5710e7a       etcd-skaffold-539000
	053e09e15cb2f       7fe0e6f37db33       4 minutes ago       Running             kube-apiserver            0                   58c9d6422c79e       kube-apiserver-skaffold-539000
	9dd69c67308da       e3db313c6dbc0       4 minutes ago       Running             kube-scheduler            0                   6cab60890b254       kube-scheduler-skaffold-539000
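The table above is the CRI runtime's view of the pods started in this run. Assuming the skaffold-539000 node is still running, roughly the same listing can be reproduced by hand (a sketch; not necessarily the exact command the log collector uses):

    minikube ssh -p skaffold-539000 -- sudo crictl ps -a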
	
	
	==> coredns [1437c0ded2c0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57109 - 20951 "HINFO IN 3265568832514174863.891857213361693231. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008044959s
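The lone NXDOMAIN line is CoreDNS's startup HINFO self-probe, not an error. Earlier in the start log, minikube rewrote this deployment's ConfigMap to add a hosts block resolving host.minikube.internal to 192.168.65.254; that injection can be confirmed against the live cluster (a sketch, assuming the kubectl context minikube created matches the profile name):

    kubectl --context skaffold-539000 -n kube-system get configmap coredns -o yaml

    # Expected fragment inside the Corefile (built by the sed expression in the start log):
    #     hosts {
    #        192.168.65.254 host.minikube.internal
    #        fallthrough
    #     }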
	
	
	==> describe nodes <==
	Name:               skaffold-539000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=skaffold-539000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdce3bf7146356e37c4eabb07ae105993e4520f9
	                    minikube.k8s.io/name=skaffold-539000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_16T09_21_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Feb 2024 17:21:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  skaffold-539000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Feb 2024 17:26:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Feb 2024 17:25:52 +0000   Fri, 16 Feb 2024 17:21:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Feb 2024 17:25:52 +0000   Fri, 16 Feb 2024 17:21:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Feb 2024 17:25:52 +0000   Fri, 16 Feb 2024 17:21:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Feb 2024 17:25:52 +0000   Fri, 16 Feb 2024 17:21:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    skaffold-539000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067676Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067676Ki
	  pods:               110
	System Info:
	  Machine ID:                 18d0fb09b25a49108b851501e5647d15
	  System UUID:                18d0fb09b25a49108b851501e5647d15
	  Boot ID:                    2fdb4e59-5394-4b60-90d3-5bb0e84fcd74
	  Kernel Version:             6.6.12-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.3
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-ps9rn                   100m (0%)     0 (0%)      70Mi (1%)        170Mi (2%)     4m39s
	  kube-system                 etcd-skaffold-539000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         4m52s
	  kube-system                 kube-apiserver-skaffold-539000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-controller-manager-skaffold-539000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-m9ns4                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-skaffold-539000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (6%)   0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m38s                  kube-proxy       
	  Normal  Starting                 4m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m58s (x8 over 4m58s)  kubelet          Node skaffold-539000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s (x8 over 4m58s)  kubelet          Node skaffold-539000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s (x7 over 4m58s)  kubelet          Node skaffold-539000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s                  kubelet          Node skaffold-539000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s                  kubelet          Node skaffold-539000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s                  kubelet          Node skaffold-539000 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4m52s                  kubelet          Node skaffold-539000 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m52s                  kubelet          Node skaffold-539000 status is now: NodeReady
	  Normal  RegisteredNode           4m40s                  node-controller  Node skaffold-539000 event: Registered Node skaffold-539000 in Controller
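The request percentages above follow directly from the capacity figures in the same output: 750m of CPU requested against 12 cores (12000m) is 750/12000 ≈ 6.25%, shown as 6%; 170Mi of memory (174080Ki) against 6067676Ki is ≈ 2.9%, shown as 2% after integer truncation. A current copy of this view can be pulled straight from the cluster (same context-name assumption as above):

    kubectl --context skaffold-539000 describe node skaffold-539000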
	
	
	==> dmesg <==
	[  +0.000001] virtio-pci 0000:00:07.0: PCI INT A: no GSI
	[  +0.003137] virtio-pci 0000:00:08.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:08.0: PCI INT A: no GSI
	[  +0.002754] virtio-pci 0000:00:09.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:09.0: PCI INT A: no GSI
	[  +0.004771] virtio-pci 0000:00:0a.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0a.0: PCI INT A: no GSI
	[  +0.004757] virtio-pci 0000:00:0b.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0b.0: PCI INT A: no GSI
	[  +0.004892] virtio-pci 0000:00:0c.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0c.0: PCI INT A: no GSI
	[  +0.001865] virtio-pci 0000:00:0d.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0d.0: PCI INT A: no GSI
	[  +0.004346] virtio-pci 0000:00:0e.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0e.0: PCI INT A: no GSI
	[  +0.000559] virtio-pci 0000:00:0f.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0f.0: PCI INT A: no GSI
	[  +0.000471] virtio-pci 0000:00:10.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:10.0: PCI INT A: no GSI
	[  +0.010206] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
	[  +0.024594] lpc_ich 0000:00:1f.0: No MFD cells added
	[  +0.205443] netlink: 'init': attribute type 4 has an invalid length.
	[  +0.036666] fakeowner: loading out-of-tree module taints kernel.
	[  +0.003047] netlink: 'init': attribute type 22 has an invalid length.
	[Feb16 16:42] systemd[1331]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [d0bf43687a80] <==
	{"level":"info","ts":"2024-02-16T17:21:11.565404Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-16T17:21:11.565421Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-16T17:21:11.565702Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-16T17:21:11.565744Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-16T17:21:12.191752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-16T17:21:12.191817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-16T17:21:12.191839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-02-16T17:21:12.19185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-02-16T17:21:12.191881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-16T17:21:12.191887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-02-16T17:21:12.191892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-16T17:21:12.192945Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T17:21:12.193632Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:skaffold-539000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-16T17:21:12.193705Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:21:12.19386Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:21:12.194051Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-16T17:21:12.194097Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T17:21:12.194115Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-16T17:21:12.194163Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T17:21:12.194181Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T17:21:12.194951Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-16T17:21:12.19501Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-16T17:25:06.733325Z","caller":"traceutil/trace.go:171","msg":"trace[1309638872] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"204.800763ms","start":"2024-02-16T17:25:06.528512Z","end":"2024-02-16T17:25:06.733313Z","steps":["trace[1309638872] 'process raft request'  (duration: 204.739683ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-16T17:25:10.986166Z","caller":"traceutil/trace.go:171","msg":"trace[1408528416] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"242.689265ms","start":"2024-02-16T17:25:10.743426Z","end":"2024-02-16T17:25:10.986115Z","steps":["trace[1408528416] 'process raft request'  (duration: 242.605836ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-16T17:25:59.177388Z","caller":"traceutil/trace.go:171","msg":"trace[1149847207] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"101.681579ms","start":"2024-02-16T17:25:59.07569Z","end":"2024-02-16T17:25:59.177371Z","steps":["trace[1149847207] 'process raft request'  (duration: 101.466338ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:26:08 up 45 min,  0 users,  load average: 11.04, 6.11, 4.63
	Linux skaffold-539000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [053e09e15cb2] <==
	I0216 17:21:13.568243       1 controller.go:624] quota admission added evaluator for: namespaces
	I0216 17:21:13.571620       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0216 17:21:13.572589       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0216 17:21:13.576480       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0216 17:21:13.663111       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0216 17:21:13.663368       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0216 17:21:13.663397       1 aggregator.go:166] initial CRD sync complete...
	I0216 17:21:13.663546       1 autoregister_controller.go:141] Starting autoregister controller
	I0216 17:21:13.663570       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0216 17:21:13.663588       1 cache.go:39] Caches are synced for autoregister controller
	I0216 17:21:14.438197       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0216 17:21:14.441693       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0216 17:21:14.441730       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0216 17:21:14.728350       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0216 17:21:14.753123       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0216 17:21:14.874751       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0216 17:21:14.879114       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0216 17:21:14.879793       1 controller.go:624] quota admission added evaluator for: endpoints
	I0216 17:21:14.882486       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0216 17:21:15.491989       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0216 17:21:16.272997       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0216 17:21:16.280232       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0216 17:21:16.287015       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0216 17:21:29.349830       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0216 17:21:29.449656       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [453f6fca776e] <==
	I0216 17:21:28.618034       1 shared_informer.go:318] Caches are synced for ephemeral
	I0216 17:21:28.623381       1 shared_informer.go:318] Caches are synced for persistent volume
	I0216 17:21:28.628631       1 shared_informer.go:318] Caches are synced for expand
	I0216 17:21:28.638185       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0216 17:21:28.648014       1 shared_informer.go:318] Caches are synced for endpoint
	I0216 17:21:28.648092       1 shared_informer.go:318] Caches are synced for deployment
	I0216 17:21:28.650280       1 shared_informer.go:318] Caches are synced for job
	I0216 17:21:28.661163       1 shared_informer.go:318] Caches are synced for resource quota
	I0216 17:21:28.677327       1 shared_informer.go:318] Caches are synced for disruption
	I0216 17:21:28.699471       1 shared_informer.go:318] Caches are synced for PVC protection
	I0216 17:21:28.707289       1 shared_informer.go:318] Caches are synced for resource quota
	I0216 17:21:28.749610       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0216 17:21:29.083381       1 shared_informer.go:318] Caches are synced for garbage collector
	I0216 17:21:29.097111       1 shared_informer.go:318] Caches are synced for garbage collector
	I0216 17:21:29.097163       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0216 17:21:29.355742       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-m9ns4"
	I0216 17:21:29.452196       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 1"
	I0216 17:21:29.552247       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-ps9rn"
	I0216 17:21:29.558094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.138069ms"
	I0216 17:21:29.563459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.135756ms"
	I0216 17:21:29.563636       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.993µs"
	I0216 17:21:29.569480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="172.613µs"
	I0216 17:21:30.679663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.345µs"
	I0216 17:21:30.696921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.671186ms"
	I0216 17:21:30.697669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="265.471µs"
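These lines show the controller-manager creating the kube-proxy DaemonSet pod and syncing the coredns ReplicaSet at a single replica, matching the "rescaled to 1 replicas" step in the start log above. A quick consistency check (sketch):

    kubectl --context skaffold-539000 -n kube-system get deployment coredns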
	
	
	==> kube-proxy [f60387f14d47] <==
	I0216 17:21:29.988713       1 server_others.go:69] "Using iptables proxy"
	I0216 17:21:29.996886       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0216 17:21:30.067602       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0216 17:21:30.070439       1 server_others.go:152] "Using iptables Proxier"
	I0216 17:21:30.070518       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0216 17:21:30.070524       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0216 17:21:30.070542       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0216 17:21:30.070836       1 server.go:846] "Version info" version="v1.28.4"
	I0216 17:21:30.070870       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0216 17:21:30.071672       1 config.go:97] "Starting endpoint slice config controller"
	I0216 17:21:30.071731       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0216 17:21:30.071858       1 config.go:188] "Starting service config controller"
	I0216 17:21:30.072139       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0216 17:21:30.071859       1 config.go:315] "Starting node config controller"
	I0216 17:21:30.072418       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0216 17:21:30.172321       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0216 17:21:30.173294       1 shared_informer.go:318] Caches are synced for service config
	I0216 17:21:30.174845       1 shared_informer.go:318] Caches are synced for node config
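kube-proxy settles on the iptables proxier in IPv4 single-stack operation: dual-stack detection finds no IPv6 cluster CIDR, so IPv6 local-traffic detection degrades to a no-op. The NAT rules it programs for Services can be inspected inside the node (a sketch):

    minikube ssh -p skaffold-539000 -- sudo iptables -t nat -L KUBE-SERVICES -n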
	
	
	==> kube-scheduler [9dd69c67308d] <==
	W0216 17:21:13.578968       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0216 17:21:13.579055       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0216 17:21:13.579131       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0216 17:21:13.579150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0216 17:21:13.579202       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0216 17:21:13.579242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0216 17:21:13.579459       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0216 17:21:13.579544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0216 17:21:13.579776       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0216 17:21:13.579798       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0216 17:21:13.579888       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0216 17:21:13.579942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0216 17:21:13.580113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0216 17:21:13.580154       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0216 17:21:13.580168       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0216 17:21:13.580213       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0216 17:21:14.467426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0216 17:21:14.467468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0216 17:21:14.495673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0216 17:21:14.495738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0216 17:21:14.503678       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0216 17:21:14.503745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0216 17:21:14.528600       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0216 17:21:14.528645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0216 17:21:14.977053       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
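The forbidden-to-list warnings above are the usual control-plane startup race: the scheduler's informers come up before the bootstrap RBAC policy has been written, so every initial list/watch is rejected, and the errors stop on their own once RBAC settles and the caches sync. If this pattern persisted past startup, the scheduler's effective permissions could be checked directly (assuming kubectl is pointed at the affected cluster):

    kubectl auth can-i list pods --as=system:kube-scheduler
    kubectl get clusterrolebinding system:kube-scheduler -o wide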
	
	
	==> kubelet <==
	Feb 16 17:21:17 skaffold-539000 kubelet[2473]: I0216 17:21:17.362824    2473 apiserver.go:52] "Watching apiserver"
	Feb 16 17:21:17 skaffold-539000 kubelet[2473]: I0216 17:21:17.380672    2473 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 16 17:21:17 skaffold-539000 kubelet[2473]: E0216 17:21:17.678258    2473 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-skaffold-539000\" already exists" pod="kube-system/etcd-skaffold-539000"
	Feb 16 17:21:17 skaffold-539000 kubelet[2473]: E0216 17:21:17.678909    2473 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-skaffold-539000\" already exists" pod="kube-system/kube-apiserver-skaffold-539000"
	Feb 16 17:21:17 skaffold-539000 kubelet[2473]: I0216 17:21:17.770268    2473 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-skaffold-539000" podStartSLOduration=1.7702070600000002 podCreationTimestamp="2024-02-16 17:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-16 17:21:17.769970705 +0000 UTC m=+1.515165095" watchObservedRunningTime="2024-02-16 17:21:17.77020706 +0000 UTC m=+1.515401443"
	Feb 16 17:21:17 skaffold-539000 kubelet[2473]: I0216 17:21:17.770508    2473 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-skaffold-539000" podStartSLOduration=1.770392661 podCreationTimestamp="2024-02-16 17:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-16 17:21:17.684788722 +0000 UTC m=+1.429983109" watchObservedRunningTime="2024-02-16 17:21:17.770392661 +0000 UTC m=+1.515587050"
	Feb 16 17:21:17 skaffold-539000 kubelet[2473]: I0216 17:21:17.781196    2473 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-skaffold-539000" podStartSLOduration=1.781152256 podCreationTimestamp="2024-02-16 17:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-16 17:21:17.78087426 +0000 UTC m=+1.526068647" watchObservedRunningTime="2024-02-16 17:21:17.781152256 +0000 UTC m=+1.526346636"
	Feb 16 17:21:17 skaffold-539000 kubelet[2473]: I0216 17:21:17.790864    2473 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-skaffold-539000" podStartSLOduration=1.79081316 podCreationTimestamp="2024-02-16 17:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-16 17:21:17.790742949 +0000 UTC m=+1.535937336" watchObservedRunningTime="2024-02-16 17:21:17.79081316 +0000 UTC m=+1.536007539"
	Feb 16 17:21:28 skaffold-539000 kubelet[2473]: I0216 17:21:28.779572    2473 topology_manager.go:215] "Topology Admit Handler" podUID="af6ac183-ad72-4832-84c8-f4a39f054f34" podNamespace="kube-system" podName="storage-provisioner"
	Feb 16 17:21:28 skaffold-539000 kubelet[2473]: I0216 17:21:28.908255    2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjmk5\" (UniqueName: \"kubernetes.io/projected/af6ac183-ad72-4832-84c8-f4a39f054f34-kube-api-access-jjmk5\") pod \"storage-provisioner\" (UID: \"af6ac183-ad72-4832-84c8-f4a39f054f34\") " pod="kube-system/storage-provisioner"
	Feb 16 17:21:28 skaffold-539000 kubelet[2473]: I0216 17:21:28.908311    2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/af6ac183-ad72-4832-84c8-f4a39f054f34-tmp\") pod \"storage-provisioner\" (UID: \"af6ac183-ad72-4832-84c8-f4a39f054f34\") " pod="kube-system/storage-provisioner"
	Feb 16 17:21:29 skaffold-539000 kubelet[2473]: I0216 17:21:29.359619    2473 topology_manager.go:215] "Topology Admit Handler" podUID="2e1be4f8-a473-4261-9adb-61056410505a" podNamespace="kube-system" podName="kube-proxy-m9ns4"
	Feb 16 17:21:29 skaffold-539000 kubelet[2473]: I0216 17:21:29.416898    2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e1be4f8-a473-4261-9adb-61056410505a-xtables-lock\") pod \"kube-proxy-m9ns4\" (UID: \"2e1be4f8-a473-4261-9adb-61056410505a\") " pod="kube-system/kube-proxy-m9ns4"
	Feb 16 17:21:29 skaffold-539000 kubelet[2473]: I0216 17:21:29.417125    2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e1be4f8-a473-4261-9adb-61056410505a-kube-proxy\") pod \"kube-proxy-m9ns4\" (UID: \"2e1be4f8-a473-4261-9adb-61056410505a\") " pod="kube-system/kube-proxy-m9ns4"
	Feb 16 17:21:29 skaffold-539000 kubelet[2473]: I0216 17:21:29.417214    2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e1be4f8-a473-4261-9adb-61056410505a-lib-modules\") pod \"kube-proxy-m9ns4\" (UID: \"2e1be4f8-a473-4261-9adb-61056410505a\") " pod="kube-system/kube-proxy-m9ns4"
	Feb 16 17:21:29 skaffold-539000 kubelet[2473]: I0216 17:21:29.417257    2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxxqh\" (UniqueName: \"kubernetes.io/projected/2e1be4f8-a473-4261-9adb-61056410505a-kube-api-access-bxxqh\") pod \"kube-proxy-m9ns4\" (UID: \"2e1be4f8-a473-4261-9adb-61056410505a\") " pod="kube-system/kube-proxy-m9ns4"
	Feb 16 17:21:29 skaffold-539000 kubelet[2473]: I0216 17:21:29.556308    2473 topology_manager.go:215] "Topology Admit Handler" podUID="31d0f8dc-55ff-4bef-af31-418c11617b76" podNamespace="kube-system" podName="coredns-5dd5756b68-ps9rn"
	Feb 16 17:21:29 skaffold-539000 kubelet[2473]: I0216 17:21:29.619587    2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31d0f8dc-55ff-4bef-af31-418c11617b76-config-volume\") pod \"coredns-5dd5756b68-ps9rn\" (UID: \"31d0f8dc-55ff-4bef-af31-418c11617b76\") " pod="kube-system/coredns-5dd5756b68-ps9rn"
	Feb 16 17:21:29 skaffold-539000 kubelet[2473]: I0216 17:21:29.619642    2473 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpfjx\" (UniqueName: \"kubernetes.io/projected/31d0f8dc-55ff-4bef-af31-418c11617b76-kube-api-access-xpfjx\") pod \"coredns-5dd5756b68-ps9rn\" (UID: \"31d0f8dc-55ff-4bef-af31-418c11617b76\") " pod="kube-system/coredns-5dd5756b68-ps9rn"
	Feb 16 17:21:29 skaffold-539000 kubelet[2473]: I0216 17:21:29.654476    2473 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.654447992 podCreationTimestamp="2024-02-16 17:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-16 17:21:29.654296244 +0000 UTC m=+13.399861257" watchObservedRunningTime="2024-02-16 17:21:29.654447992 +0000 UTC m=+13.400013005"
	Feb 16 17:21:30 skaffold-539000 kubelet[2473]: I0216 17:21:30.668462    2473 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-m9ns4" podStartSLOduration=1.668435662 podCreationTimestamp="2024-02-16 17:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-16 17:21:30.668417698 +0000 UTC m=+14.413982716" watchObservedRunningTime="2024-02-16 17:21:30.668435662 +0000 UTC m=+14.414000675"
	Feb 16 17:21:30 skaffold-539000 kubelet[2473]: I0216 17:21:30.679774    2473 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ps9rn" podStartSLOduration=1.6797412280000001 podCreationTimestamp="2024-02-16 17:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-16 17:21:30.679084795 +0000 UTC m=+14.424649815" watchObservedRunningTime="2024-02-16 17:21:30.679741228 +0000 UTC m=+14.425306249"
	Feb 16 17:21:37 skaffold-539000 kubelet[2473]: I0216 17:21:37.031260    2473 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 16 17:21:37 skaffold-539000 kubelet[2473]: I0216 17:21:37.032007    2473 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 16 17:21:59 skaffold-539000 kubelet[2473]: I0216 17:21:59.825951    2473 scope.go:117] "RemoveContainer" containerID="011dbeabaf6794109ef2f86b7aecd66829af3c273c2cd6260442f7eaca05ada8"
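The RemoveContainer entry above is kubelet garbage-collecting the first storage-provisioner container (011dbeabaf67..., whose logs follow below) after its replacement came up. For a restart like this the previous instance usually holds the interesting logs; with the standard minikube pod name that would be:

    kubectl -n kube-system logs storage-provisioner --previous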
	
	
	==> storage-provisioner [011dbeabaf67] <==
	I0216 17:21:29.268726       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0216 17:21:59.272260       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
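10.96.0.1 is the in-cluster kubernetes Service VIP (the first address of the default 10.96.0.0/12 service CIDR), so this fatal means the provisioner could not reach the apiserver through the service network within client-go's 32s timeout. Here that looks like a startup ordering race: the provisioner pod was admitted at 17:21:28, a second before kube-proxy-m9ns4, so no service rules existed yet. If the timeout were persistent rather than transient, two quick checks would be:

    kubectl get endpoints kubernetes                       # VIP must resolve to the apiserver endpoint
    kubectl -n kube-system get pods -l k8s-app=kube-proxy  # proxy must be Running on the node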
	
	
	==> storage-provisioner [7e10884c0802] <==
	I0216 17:21:59.924062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0216 17:21:59.931913       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0216 17:21:59.931961       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0216 17:21:59.939717       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0216 17:21:59.939859       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_skaffold-539000_7d64215f-c1c0-4682-9b6b-2f02c943d149!
	I0216 17:21:59.940747       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a454328-37b7-4c76-9563-e15ff1d77500", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' skaffold-539000_7d64215f-c1c0-4682-9b6b-2f02c943d149 became leader
	I0216 17:22:00.040073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_skaffold-539000_7d64215f-c1c0-4682-9b6b-2f02c943d149!
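The restarted container recovers cleanly. This provisioner still uses the legacy Endpoints-based leader election (note leaderelection.go acquiring kube-system/k8s.io-minikube-hostpath), so the current holder is recorded in the control-plane.alpha.kubernetes.io/leader annotation on that Endpoints object and can be inspected with:

    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml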
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p skaffold-539000 -n skaffold-539000
helpers_test.go:261: (dbg) Run:  kubectl --context skaffold-539000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestSkaffold FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "skaffold-539000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-539000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-539000: (2.989839776s)
--- FAIL: TestSkaffold (323.63s)

TestKubernetesUpgrade (578.25s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-089000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0216 09:32:02.652321    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-089000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m18.201269393s)

-- stdout --
	* [kubernetes-upgrade-089000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=17936
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-089000 in cluster kubernetes-upgrade-089000
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
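The doubled "Generating certificates and keys ... / Booting up control plane ..." pairs in the stdout above mean kubeadm init failed on the first pass and minikube retried it before giving up; exit status 109 falls in the 100-series block that minikube's exit-code scheme reserves for control-plane errors. While the profile still exists, the full kubeadm output is normally recoverable with:

    out/minikube-darwin-amd64 logs -p kubernetes-upgrade-089000 --file=kubernetes-upgrade-089000.log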
** stderr ** 
	I0216 09:31:46.456225   12231 out.go:291] Setting OutFile to fd 1 ...
	I0216 09:31:46.456422   12231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:31:46.456427   12231 out.go:304] Setting ErrFile to fd 2...
	I0216 09:31:46.456431   12231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:31:46.456617   12231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 09:31:46.458135   12231 out.go:298] Setting JSON to false
	I0216 09:31:46.488927   12231 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3677,"bootTime":1708101029,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 09:31:46.489100   12231 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 09:31:46.511612   12231 out.go:177] * [kubernetes-upgrade-089000] minikube v1.32.0 on Darwin 14.3.1
	I0216 09:31:46.554412   12231 notify.go:220] Checking for updates...
	I0216 09:31:46.575203   12231 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 09:31:46.617108   12231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 09:31:46.659024   12231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 09:31:46.701118   12231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 09:31:46.721990   12231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	I0216 09:31:46.764058   12231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 09:31:46.785600   12231 config.go:182] Loaded profile config "missing-upgrade-161000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0216 09:31:46.785696   12231 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 09:31:46.845297   12231 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 09:31:46.845507   12231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:31:46.960787   12231 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:122 SystemTime:2024-02-16 17:31:46.950027868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:31:47.003929   12231 out.go:177] * Using the docker driver based on user configuration
	I0216 09:31:47.025119   12231 start.go:299] selected driver: docker
	I0216 09:31:47.025135   12231 start.go:903] validating driver "docker" against <nil>
	I0216 09:31:47.025145   12231 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 09:31:47.028867   12231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:31:47.156382   12231 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:122 SystemTime:2024-02-16 17:31:47.140130729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:31:47.156774   12231 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 09:31:47.157080   12231 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 09:31:47.179035   12231 out.go:177] * Using Docker Desktop driver with root privileges
	I0216 09:31:47.199783   12231 cni.go:84] Creating CNI manager for ""
	I0216 09:31:47.199887   12231 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 09:31:47.199925   12231 start_flags.go:323] config:
	{Name:kubernetes-upgrade-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:31:47.221899   12231 out.go:177] * Starting control plane node kubernetes-upgrade-089000 in cluster kubernetes-upgrade-089000
	I0216 09:31:47.263746   12231 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 09:31:47.284833   12231 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 09:31:47.326743   12231 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 09:31:47.326743   12231 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 09:31:47.326881   12231 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 09:31:47.326917   12231 cache.go:56] Caching tarball of preloaded images
	I0216 09:31:47.327340   12231 preload.go:174] Found /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 09:31:47.327420   12231 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 09:31:47.328895   12231 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/config.json ...
	I0216 09:31:47.329130   12231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/config.json: {Name:mk00abfe8b39c2f5f97bb78892220d4db717a0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:31:47.390559   12231 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 09:31:47.390579   12231 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 09:31:47.390599   12231 cache.go:194] Successfully downloaded all kic artifacts
	I0216 09:31:47.390641   12231 start.go:365] acquiring machines lock for kubernetes-upgrade-089000: {Name:mk9449c9299f15a4a0c897976f1618cf30fb8a7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 09:31:47.390789   12231 start.go:369] acquired machines lock for "kubernetes-upgrade-089000" in 135.361µs
	I0216 09:31:47.390814   12231 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-089000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 09:31:47.390878   12231 start.go:125] createHost starting for "" (driver="docker")
	I0216 09:31:47.416914   12231 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0216 09:31:47.417267   12231 start.go:159] libmachine.API.Create for "kubernetes-upgrade-089000" (driver="docker")
	I0216 09:31:47.417309   12231 client.go:168] LocalClient.Create starting
	I0216 09:31:47.417533   12231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem
	I0216 09:31:47.417622   12231 main.go:141] libmachine: Decoding PEM data...
	I0216 09:31:47.417647   12231 main.go:141] libmachine: Parsing certificate...
	I0216 09:31:47.417711   12231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem
	I0216 09:31:47.417756   12231 main.go:141] libmachine: Decoding PEM data...
	I0216 09:31:47.417765   12231 main.go:141] libmachine: Parsing certificate...
	I0216 09:31:47.438703   12231 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-089000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0216 09:31:47.490811   12231 cli_runner.go:211] docker network inspect kubernetes-upgrade-089000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0216 09:31:47.490912   12231 network_create.go:281] running [docker network inspect kubernetes-upgrade-089000] to gather additional debugging logs...
	I0216 09:31:47.490929   12231 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-089000
	W0216 09:31:47.542376   12231 cli_runner.go:211] docker network inspect kubernetes-upgrade-089000 returned with exit code 1
	I0216 09:31:47.542410   12231 network_create.go:284] error running [docker network inspect kubernetes-upgrade-089000]: docker network inspect kubernetes-upgrade-089000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-089000 not found
	I0216 09:31:47.542427   12231 network_create.go:286] output of [docker network inspect kubernetes-upgrade-089000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-089000 not found
	
	** /stderr **
	I0216 09:31:47.542555   12231 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 09:31:47.595375   12231 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0216 09:31:47.595764   12231 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002293c60}
	I0216 09:31:47.595780   12231 network_create.go:124] attempt to create docker network kubernetes-upgrade-089000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0216 09:31:47.595857   12231 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-089000 kubernetes-upgrade-089000
	W0216 09:31:47.647590   12231 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-089000 kubernetes-upgrade-089000 returned with exit code 1
	W0216 09:31:47.647635   12231 network_create.go:149] failed to create docker network kubernetes-upgrade-089000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-089000 kubernetes-upgrade-089000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0216 09:31:47.647661   12231 network_create.go:116] failed to create docker network kubernetes-upgrade-089000 192.168.58.0/24, will retry: subnet is taken
	I0216 09:31:47.649040   12231 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0216 09:31:47.649414   12231 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023c3ce0}
	I0216 09:31:47.649428   12231 network_create.go:124] attempt to create docker network kubernetes-upgrade-089000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0216 09:31:47.649492   12231 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-089000 kubernetes-upgrade-089000
	I0216 09:31:47.906719   12231 network_create.go:108] docker network kubernetes-upgrade-089000 192.168.67.0/24 created
	I0216 09:31:47.906758   12231 kic.go:121] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-089000" container
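The "Pool overlaps" failure a few lines up is minikube probing for a free private /24: 192.168.49.0/24 was already reserved and 192.168.58.0/24 collided with an existing pool (the concurrent missing-upgrade-161000 profile is a likely owner), so it walked on to 192.168.67.0/24. The subnets currently held by Docker networks can be listed straight from the CLI:

    docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)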
	I0216 09:31:47.906892   12231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0216 09:31:47.959206   12231 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-089000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-089000 --label created_by.minikube.sigs.k8s.io=true
	I0216 09:31:48.108795   12231 oci.go:103] Successfully created a docker volume kubernetes-upgrade-089000
	I0216 09:31:48.108989   12231 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-089000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-089000 --entrypoint /usr/bin/test -v kubernetes-upgrade-089000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0216 09:31:48.686464   12231 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-089000
	I0216 09:31:48.686516   12231 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 09:31:48.686529   12231 kic.go:194] Starting extracting preloaded images to volume ...
	I0216 09:31:48.686641   12231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-089000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0216 09:31:51.143882   12231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-089000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (2.456919534s)
	I0216 09:31:51.143910   12231 kic.go:203] duration metric: took 2.457403 seconds to extract preloaded images to volume
	I0216 09:31:51.144020   12231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0216 09:31:51.263326   12231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-089000 --name kubernetes-upgrade-089000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-089000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-089000 --network kubernetes-upgrade-089000 --ip 192.168.67.2 --volume kubernetes-upgrade-089000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0216 09:31:51.548824   12231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-089000 --format={{.State.Running}}
	I0216 09:31:51.608809   12231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-089000 --format={{.State.Status}}
	I0216 09:31:51.669509   12231 cli_runner.go:164] Run: docker exec kubernetes-upgrade-089000 stat /var/lib/dpkg/alternatives/iptables
	I0216 09:31:51.808051   12231 oci.go:144] the created container "kubernetes-upgrade-089000" has a running status.
	I0216 09:31:51.808106   12231 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa...
	I0216 09:31:51.997061   12231 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0216 09:31:52.066551   12231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-089000 --format={{.State.Status}}
	I0216 09:31:52.123994   12231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0216 09:31:52.124024   12231 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-089000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0216 09:31:52.212916   12231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-089000 --format={{.State.Status}}
	I0216 09:31:52.265445   12231 machine.go:88] provisioning docker machine ...
	I0216 09:31:52.265511   12231 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-089000"
	I0216 09:31:52.265635   12231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:31:52.316736   12231 main.go:141] libmachine: Using SSH client type: native
	I0216 09:31:52.317072   12231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 52087 <nil> <nil>}
	I0216 09:31:52.317088   12231 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-089000 && echo "kubernetes-upgrade-089000" | sudo tee /etc/hostname
	I0216 09:31:52.476894   12231 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-089000
	
	I0216 09:31:52.477003   12231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:31:52.531185   12231 main.go:141] libmachine: Using SSH client type: native
	I0216 09:31:52.531488   12231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 52087 <nil> <nil>}
	I0216 09:31:52.531502   12231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-089000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-089000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-089000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 09:31:52.667599   12231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 09:31:52.667622   12231 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17936-1021/.minikube CaCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17936-1021/.minikube}
	I0216 09:31:52.667638   12231 ubuntu.go:177] setting up certificates
	I0216 09:31:52.667647   12231 provision.go:83] configureAuth start
	I0216 09:31:52.667736   12231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-089000
	I0216 09:31:52.726014   12231 provision.go:138] copyHostCerts
	I0216 09:31:52.726108   12231 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem, removing ...
	I0216 09:31:52.726117   12231 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
	I0216 09:31:52.726248   12231 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem (1123 bytes)
	I0216 09:31:52.726480   12231 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem, removing ...
	I0216 09:31:52.726486   12231 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
	I0216 09:31:52.726558   12231 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem (1675 bytes)
	I0216 09:31:52.726752   12231 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem, removing ...
	I0216 09:31:52.726758   12231 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
	I0216 09:31:52.726821   12231 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem (1082 bytes)
	I0216 09:31:52.726980   12231 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-089000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-089000]
	I0216 09:31:52.832040   12231 provision.go:172] copyRemoteCerts
	I0216 09:31:52.832168   12231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 09:31:52.832272   12231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:31:52.890111   12231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52087 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:31:52.991665   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 09:31:53.035373   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 09:31:53.082950   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0216 09:31:53.125334   12231 provision.go:86] duration metric: configureAuth took 457.673858ms
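configureAuth generated a fresh TLS server certificate whose SANs cover the container IP, localhost, and the machine name (the san=[...] list above), then copied ca.pem, server.pem, and server-key.pem into /etc/docker so that dockerd can run with --tlsverify (visible in the unit file written below). The SANs on the generated cert can be double-checked with openssl:

    openssl x509 -in /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem \
        -noout -text | grep -A1 'Subject Alternative Name'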
	I0216 09:31:53.125353   12231 ubuntu.go:193] setting minikube options for container-runtime
	I0216 09:31:53.125527   12231 config.go:182] Loaded profile config "kubernetes-upgrade-089000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 09:31:53.125614   12231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:31:53.256207   12231 main.go:141] libmachine: Using SSH client type: native
	I0216 09:31:53.256555   12231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 52087 <nil> <nil>}
	I0216 09:31:53.256575   12231 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 09:31:53.396422   12231 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 09:31:53.396446   12231 ubuntu.go:71] root file system type: overlay
	I0216 09:31:53.396541   12231 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 09:31:53.396693   12231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:31:53.452083   12231 main.go:141] libmachine: Using SSH client type: native
	I0216 09:31:53.452387   12231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 52087 <nil> <nil>}
	I0216 09:31:53.452441   12231 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 09:31:53.614246   12231 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 09:31:53.614351   12231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:31:53.671196   12231 main.go:141] libmachine: Using SSH client type: native
	I0216 09:31:53.671510   12231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 52087 <nil> <nil>}
	I0216 09:31:53.671527   12231 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 09:31:54.346557   12231 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-16 17:31:53.608720118 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0216 09:31:54.346585   12231 machine.go:91] provisioned docker machine in 2.081115517s
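The drop-in written above depends on the bare ExecStart= line to clear the start command inherited from the distribution unit, exactly as its own comments explain; the diff output confirms the packaged fd:// ExecStart was replaced by the TLS-enabled one. A quick way to confirm which command systemd actually loaded after the restart (a sketch, assuming a shell inside the node container):

	# show every ExecStart= systemd parsed for docker, then the live daemon args
	sudo systemctl cat docker | grep '^ExecStart'
	ps -o args= -C dockerd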
	I0216 09:31:54.346593   12231 client.go:171] LocalClient.Create took 6.929343757s
	I0216 09:31:54.346610   12231 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-089000" took 6.929411224s
	I0216 09:31:54.346621   12231 start.go:300] post-start starting for "kubernetes-upgrade-089000" (driver="docker")
	I0216 09:31:54.346632   12231 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 09:31:54.346709   12231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 09:31:54.346767   12231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:31:54.401448   12231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52087 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:31:54.502651   12231 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 09:31:54.506976   12231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 09:31:54.507007   12231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 09:31:54.507014   12231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 09:31:54.507020   12231 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 09:31:54.507030   12231 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/addons for local assets ...
	I0216 09:31:54.507131   12231 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/files for local assets ...
	I0216 09:31:54.507305   12231 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> 21512.pem in /etc/ssl/certs
	I0216 09:31:54.507480   12231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 09:31:54.522616   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:31:54.564272   12231 start.go:303] post-start completed in 217.643854ms
	I0216 09:31:54.564888   12231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-089000
	I0216 09:31:54.619239   12231 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/config.json ...
	I0216 09:31:54.619721   12231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 09:31:54.619787   12231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:31:54.673888   12231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52087 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:31:54.766739   12231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 09:31:54.772063   12231 start.go:128] duration metric: createHost completed in 7.381235207s
	I0216 09:31:54.772086   12231 start.go:83] releasing machines lock for "kubernetes-upgrade-089000", held for 7.381358174s
	I0216 09:31:54.772195   12231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-089000
	I0216 09:31:54.826722   12231 ssh_runner.go:195] Run: cat /version.json
	I0216 09:31:54.826738   12231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 09:31:54.826794   12231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:31:54.826817   12231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:31:54.886409   12231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52087 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:31:54.886415   12231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52087 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:31:54.978810   12231 ssh_runner.go:195] Run: systemctl --version
	I0216 09:31:55.093711   12231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 09:31:55.099143   12231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 09:31:55.147112   12231 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 09:31:55.147195   12231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 09:31:55.177817   12231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 09:31:55.208580   12231 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
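The two find/sed passes above normalize whatever CNI configs ship in the base image: the loopback config gains an explicit name and is pinned to cniVersion 1.0.0, and the bridge/podman configs get their subnet rewritten to the 10.244.0.0/16 pod CIDR used later in the kubeadm config. A minimal check of the result (sketch, run on the node):

	# verify the patched bridge configs now pin the pod subnet
	grep -h '"subnet"' /etc/cni/net.d/*bridge*.conf* 2>/dev/null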
	I0216 09:31:55.208596   12231 start.go:475] detecting cgroup driver to use...
	I0216 09:31:55.208614   12231 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:31:55.208716   12231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:31:55.237697   12231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0216 09:31:55.255512   12231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 09:31:55.274077   12231 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 09:31:55.274165   12231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 09:31:55.291870   12231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:31:55.309804   12231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 09:31:55.327885   12231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:31:55.346102   12231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 09:31:55.365093   12231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 09:31:55.383391   12231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 09:31:55.400294   12231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 09:31:55.416065   12231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:31:55.483118   12231 ssh_runner.go:195] Run: sudo systemctl restart containerd
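The sed sequence above flips containerd to the cgroupfs driver (SystemdCgroup = false), forces the io.containerd.runc.v2 shim, and points conf_dir at /etc/cni/net.d before the daemon-reload and restart. Checking that the edits landed is a one-liner (sketch):

	# confirm the containerd cgroup setting and that the restart succeeded
	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	sudo systemctl is-active containerd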
	I0216 09:31:55.567034   12231 start.go:475] detecting cgroup driver to use...
	I0216 09:31:55.567056   12231 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:31:55.567128   12231 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 09:31:55.589392   12231 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 09:31:55.589465   12231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 09:31:55.610754   12231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:31:55.648468   12231 ssh_runner.go:195] Run: which cri-dockerd
	I0216 09:31:55.654382   12231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 09:31:55.672514   12231 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 09:31:55.705779   12231 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 09:31:55.808389   12231 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 09:31:55.915111   12231 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 09:31:55.915277   12231 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 09:31:55.947097   12231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:31:56.009761   12231 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 09:31:56.282912   12231 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 09:31:56.305482   12231 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
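Both docker version probes above run against the freshly restarted daemon; the cgroup driver itself was set through the 130-byte /etc/docker/daemon.json written at 09:31:55.915 (its contents are not echoed in the log). The same check minikube performs later, at 09:32:07.813, can be run by hand (sketch):

	# report the cgroup driver the docker daemon is actually using
	docker info --format '{{.CgroupDriver}}'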
	I0216 09:31:56.373325   12231 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0216 09:31:56.373455   12231 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-089000 dig +short host.docker.internal
	I0216 09:31:56.495364   12231 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0216 09:31:56.495457   12231 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0216 09:31:56.501161   12231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 09:31:56.519747   12231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:31:56.576412   12231 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 09:31:56.576491   12231 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:31:56.598886   12231 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 09:31:56.598905   12231 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 09:31:56.598973   12231 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 09:31:56.615346   12231 ssh_runner.go:195] Run: which lz4
	I0216 09:31:56.619816   12231 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 09:31:56.625256   12231 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 09:31:56.625370   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0216 09:32:02.830653   12231 docker.go:649] Took 6.210945 seconds to copy over tarball
	I0216 09:32:02.830793   12231 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 09:32:04.446206   12231 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.615397434s)
	I0216 09:32:04.446223   12231 ssh_runner.go:146] rm: /preloaded.tar.lz4
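The 369789069-byte preload moved in about 6.21 s, i.e. roughly 59 MB/s over the local SSH tunnel; the one-liner below just reproduces that arithmetic from the two log lines above (sketch):

	# transfer rate implied by the scp size and the "Took 6.210945 seconds" line
	awk 'BEGIN { printf "%.1f MB/s\n", 369789069 / 6.210945 / 1e6 }'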
	I0216 09:32:04.496639   12231 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 09:32:04.511676   12231 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0216 09:32:04.540702   12231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:32:04.604412   12231 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 09:32:05.218989   12231 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:32:05.238269   12231 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 09:32:05.238290   12231 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 09:32:05.238319   12231 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 09:32:05.247352   12231 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0216 09:32:05.247395   12231 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0216 09:32:05.247559   12231 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:32:05.247901   12231 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:32:05.248112   12231 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 09:32:05.248255   12231 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:32:05.248355   12231 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0216 09:32:05.248795   12231 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:32:05.255092   12231 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:32:05.256225   12231 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0216 09:32:05.256785   12231 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:32:05.256864   12231 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 09:32:05.257351   12231 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0216 09:32:05.257360   12231 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0216 09:32:05.257615   12231 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:32:05.257868   12231 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:32:07.159810   12231 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:32:07.182764   12231 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0216 09:32:07.182833   12231 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:32:07.182952   12231 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:32:07.201442   12231 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0216 09:32:07.223703   12231 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0216 09:32:07.245141   12231 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0216 09:32:07.245168   12231 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0216 09:32:07.245242   12231 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0216 09:32:07.264374   12231 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0216 09:32:07.285387   12231 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0216 09:32:07.287242   12231 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:32:07.299915   12231 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:32:07.307909   12231 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0216 09:32:07.312966   12231 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0216 09:32:07.312966   12231 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0216 09:32:07.313010   12231 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:32:07.313011   12231 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0216 09:32:07.313075   12231 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:32:07.313075   12231 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0216 09:32:07.314769   12231 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:32:07.328375   12231 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0216 09:32:07.328410   12231 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:32:07.328495   12231 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:32:07.338758   12231 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0216 09:32:07.338820   12231 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0216 09:32:07.338928   12231 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0216 09:32:07.398602   12231 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0216 09:32:07.398597   12231 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0216 09:32:07.398700   12231 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0216 09:32:07.398743   12231 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:32:07.398848   12231 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:32:07.406350   12231 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0216 09:32:07.418274   12231 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0216 09:32:07.421387   12231 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0216 09:32:07.792495   12231 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 09:32:07.812831   12231 cache_images.go:92] LoadImages completed in 2.57451448s
	W0216 09:32:07.812940   12231 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
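The warning is a naming mismatch rather than a missing download: the preload (listed twice above) ships the v1.16.0 images under their historical k8s.gcr.io names, while this minikube checks for registry.k8s.io names and finds no per-image cache fallback for kube-controller-manager on the host, so it warns and continues, leaving kubeadm to pull anything genuinely absent. A hypothetical manual workaround, run inside the node, would be to alias the preloaded tags (sketch; nothing in the log shows minikube doing this):

	# alias the k8s.gcr.io preload tags to the registry.k8s.io names being checked
	for c in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
	  docker tag "k8s.gcr.io/$c:v1.16.0" "registry.k8s.io/$c:v1.16.0"
	done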
	I0216 09:32:07.813057   12231 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 09:32:07.866154   12231 cni.go:84] Creating CNI manager for ""
	I0216 09:32:07.866207   12231 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 09:32:07.866223   12231 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 09:32:07.866243   12231 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-089000 NodeName:kubernetes-upgrade-089000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 09:32:07.866361   12231 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-089000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-089000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0216 09:32:07.866432   12231 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-089000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 09:32:07.866515   12231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0216 09:32:07.883556   12231 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 09:32:07.883627   12231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 09:32:07.902390   12231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0216 09:32:07.932532   12231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 09:32:07.963238   12231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0216 09:32:08.001182   12231 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0216 09:32:08.006895   12231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 09:32:08.027114   12231 certs.go:56] Setting up /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000 for IP: 192.168.67.2
	I0216 09:32:08.027142   12231 certs.go:190] acquiring lock for shared ca certs: {Name:mk8795f926ccc5dd497b243df5a2c158b5c5b28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:32:08.027315   12231 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key
	I0216 09:32:08.027366   12231 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key
	I0216 09:32:08.027411   12231 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.key
	I0216 09:32:08.027425   12231 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.crt with IP's: []
	I0216 09:32:08.186117   12231 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.crt ...
	I0216 09:32:08.186133   12231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.crt: {Name:mk495fdba3249d4c1b14c14902102bc8618ed48f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:32:08.186522   12231 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.key ...
	I0216 09:32:08.186536   12231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.key: {Name:mk620b9277e7b5dfc0dfb0f27f1b7ede83d913c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:32:08.186781   12231 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.key.c7fa3a9e
	I0216 09:32:08.186796   12231 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0216 09:32:08.246529   12231 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.crt.c7fa3a9e ...
	I0216 09:32:08.246544   12231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.crt.c7fa3a9e: {Name:mk594566a36f800a498962c996abacb25a997df5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:32:08.246881   12231 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.key.c7fa3a9e ...
	I0216 09:32:08.246891   12231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.key.c7fa3a9e: {Name:mk0641fd888815406ebe1283106c5b7e434859fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:32:08.247122   12231 certs.go:337] copying /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.crt
	I0216 09:32:08.247401   12231 certs.go:341] copying /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.key
	I0216 09:32:08.247612   12231 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/proxy-client.key
	I0216 09:32:08.247626   12231 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/proxy-client.crt with IP's: []
	I0216 09:32:08.362763   12231 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/proxy-client.crt ...
	I0216 09:32:08.362781   12231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/proxy-client.crt: {Name:mkfd7356cd255658b066b48bd2ec0981cd9aeb0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:32:08.363106   12231 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/proxy-client.key ...
	I0216 09:32:08.363116   12231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/proxy-client.key: {Name:mk722051641aeecad2d6522dacc07d7b55924674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:32:08.363614   12231 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem (1338 bytes)
	W0216 09:32:08.363659   12231 certs.go:433] ignoring /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151_empty.pem, impossibly tiny 0 bytes
	I0216 09:32:08.363673   12231 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 09:32:08.363708   12231 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem (1082 bytes)
	I0216 09:32:08.363743   12231 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem (1123 bytes)
	I0216 09:32:08.363770   12231 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem (1675 bytes)
	I0216 09:32:08.363835   12231 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:32:08.364446   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 09:32:08.409488   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0216 09:32:08.452980   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 09:32:08.503274   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0216 09:32:08.547734   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 09:32:08.601910   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 09:32:08.665004   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 09:32:08.709000   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 09:32:08.765982   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 09:32:08.806698   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem --> /usr/share/ca-certificates/2151.pem (1338 bytes)
	I0216 09:32:08.856419   12231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /usr/share/ca-certificates/21512.pem (1708 bytes)
	I0216 09:32:08.901663   12231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
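All of the key material scp'd above can be sanity-checked on the node with openssl; for the API server cert this also surfaces the SANs requested at 09:32:08.186 (192.168.67.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). A sketch:

	# print subject, validity window and SANs of the freshly copied cert
	openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -dates
	openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -ext subjectAltName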
	I0216 09:32:08.930659   12231 ssh_runner.go:195] Run: openssl version
	I0216 09:32:08.936965   12231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 09:32:08.953588   12231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:32:08.958276   12231 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:32:08.958328   12231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:32:08.965055   12231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 09:32:08.980911   12231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2151.pem && ln -fs /usr/share/ca-certificates/2151.pem /etc/ssl/certs/2151.pem"
	I0216 09:32:08.996990   12231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2151.pem
	I0216 09:32:09.001581   12231 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:51 /usr/share/ca-certificates/2151.pem
	I0216 09:32:09.001648   12231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2151.pem
	I0216 09:32:09.008138   12231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2151.pem /etc/ssl/certs/51391683.0"
	I0216 09:32:09.024400   12231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21512.pem && ln -fs /usr/share/ca-certificates/21512.pem /etc/ssl/certs/21512.pem"
	I0216 09:32:09.040669   12231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21512.pem
	I0216 09:32:09.045269   12231 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:51 /usr/share/ca-certificates/21512.pem
	I0216 09:32:09.045331   12231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21512.pem
	I0216 09:32:09.052119   12231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21512.pem /etc/ssl/certs/3ec20f2e.0"
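The 8-hex-digit link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes with a ".0" collision-counter suffix; that is what the interleaved openssl x509 -hash calls compute, and it is how OpenSSL locates a CA in /etc/ssl/certs at verification time. Reproducing one by hand (sketch):

	# subject hash for minikubeCA, matching the /etc/ssl/certs/b5213941.0 link
	openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/minikubeCA.pem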
	I0216 09:32:09.068024   12231 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 09:32:09.072207   12231 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0216 09:32:09.072256   12231 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:32:09.072347   12231 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 09:32:09.090720   12231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 09:32:09.106587   12231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 09:32:09.123202   12231 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 09:32:09.123269   12231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:32:09.139401   12231 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
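The init invocation that follows suppresses a long list of checks via --ignore-preflight-errors; the preflight phase can also be exercised on its own against the same rendered config to see which checks would fire, using the pinned binaries the way the log does (sketch):

	# dry-run only the preflight checks against the generated kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml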
	I0216 09:32:09.139429   12231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 09:32:09.198666   12231 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 09:32:09.198746   12231 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 09:32:09.467021   12231 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 09:32:09.467116   12231 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 09:32:09.467299   12231 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 09:32:09.653526   12231 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 09:32:09.654631   12231 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 09:32:09.661959   12231 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 09:32:09.734539   12231 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 09:32:09.796880   12231 out.go:204]   - Generating certificates and keys ...
	I0216 09:32:09.797007   12231 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 09:32:09.797135   12231 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 09:32:09.831409   12231 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0216 09:32:10.055282   12231 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0216 09:32:10.225846   12231 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0216 09:32:10.520889   12231 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0216 09:32:10.713449   12231 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0216 09:32:10.713573   12231 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-089000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0216 09:32:10.827544   12231 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0216 09:32:10.827699   12231 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-089000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0216 09:32:10.923121   12231 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0216 09:32:10.965146   12231 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0216 09:32:11.124676   12231 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0216 09:32:11.124725   12231 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 09:32:11.373428   12231 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 09:32:11.540633   12231 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 09:32:11.786318   12231 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 09:32:12.368495   12231 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 09:32:12.369322   12231 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 09:32:12.407557   12231 out.go:204]   - Booting up control plane ...
	I0216 09:32:12.407708   12231 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 09:32:12.407843   12231 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 09:32:12.407966   12231 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 09:32:12.408122   12231 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 09:32:12.408421   12231 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 09:32:52.378936   12231 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 09:32:52.380062   12231 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:32:52.380340   12231 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:32:57.381153   12231 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:32:57.381307   12231 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:33:07.383046   12231 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:33:07.383280   12231 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:33:27.385155   12231 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:33:27.385363   12231 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:34:07.387173   12231 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:34:07.387371   12231 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:34:07.387382   12231 kubeadm.go:322] 
	I0216 09:34:07.387408   12231 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 09:34:07.387440   12231 kubeadm.go:322] 	timed out waiting for the condition
	I0216 09:34:07.387449   12231 kubeadm.go:322] 
	I0216 09:34:07.387482   12231 kubeadm.go:322] This error is likely caused by:
	I0216 09:34:07.387507   12231 kubeadm.go:322] 	- The kubelet is not running
	I0216 09:34:07.387593   12231 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 09:34:07.387604   12231 kubeadm.go:322] 
	I0216 09:34:07.387681   12231 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 09:34:07.387708   12231 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 09:34:07.387732   12231 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 09:34:07.387736   12231 kubeadm.go:322] 
	I0216 09:34:07.387823   12231 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 09:34:07.387901   12231 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 09:34:07.387973   12231 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 09:34:07.388012   12231 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 09:34:07.388077   12231 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 09:34:07.388104   12231 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 09:34:07.392480   12231 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 09:34:07.392548   12231 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0216 09:34:07.392661   12231 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 09:34:07.392743   12231 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 09:34:07.392814   12231 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 09:34:07.392875   12231 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0216 09:34:07.392944   12231 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-089000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-089000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0216 09:34:07.392976   12231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 09:34:07.811670   12231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 09:34:07.828813   12231 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 09:34:07.828876   12231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:34:07.843674   12231 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 09:34:07.843704   12231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 09:34:07.900074   12231 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 09:34:07.900125   12231 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 09:34:08.130942   12231 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 09:34:08.131056   12231 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 09:34:08.131204   12231 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 09:34:08.288645   12231 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 09:34:08.290734   12231 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 09:34:08.297077   12231 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 09:34:08.365593   12231 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 09:34:08.386792   12231 out.go:204]   - Generating certificates and keys ...
	I0216 09:34:08.386856   12231 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 09:34:08.386914   12231 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 09:34:08.386974   12231 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 09:34:08.387016   12231 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 09:34:08.387067   12231 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 09:34:08.387107   12231 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 09:34:08.387162   12231 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 09:34:08.387231   12231 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 09:34:08.387305   12231 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 09:34:08.387356   12231 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 09:34:08.387389   12231 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 09:34:08.387444   12231 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 09:34:08.484173   12231 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 09:34:08.667311   12231 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 09:34:08.813972   12231 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 09:34:08.945403   12231 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 09:34:08.946174   12231 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 09:34:08.967743   12231 out.go:204]   - Booting up control plane ...
	I0216 09:34:08.968074   12231 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 09:34:08.968137   12231 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 09:34:08.968195   12231 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 09:34:08.968258   12231 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 09:34:08.968395   12231 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 09:34:48.954792   12231 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 09:34:48.955110   12231 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:34:48.955314   12231 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:34:53.956730   12231 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:34:53.956896   12231 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:35:03.958070   12231 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:35:03.958329   12231 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:35:23.959176   12231 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:35:23.959465   12231 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:36:03.960307   12231 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:36:03.960471   12231 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:36:03.960482   12231 kubeadm.go:322] 
	I0216 09:36:03.960513   12231 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 09:36:03.960552   12231 kubeadm.go:322] 	timed out waiting for the condition
	I0216 09:36:03.960565   12231 kubeadm.go:322] 
	I0216 09:36:03.960596   12231 kubeadm.go:322] This error is likely caused by:
	I0216 09:36:03.960630   12231 kubeadm.go:322] 	- The kubelet is not running
	I0216 09:36:03.960723   12231 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 09:36:03.960734   12231 kubeadm.go:322] 
	I0216 09:36:03.960850   12231 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 09:36:03.960880   12231 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 09:36:03.960902   12231 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 09:36:03.960906   12231 kubeadm.go:322] 
	I0216 09:36:03.960976   12231 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 09:36:03.961077   12231 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 09:36:03.961166   12231 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 09:36:03.961225   12231 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 09:36:03.961316   12231 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 09:36:03.961380   12231 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 09:36:03.965385   12231 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 09:36:03.965468   12231 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0216 09:36:03.965570   12231 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 09:36:03.965654   12231 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 09:36:03.965716   12231 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 09:36:03.965781   12231 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 09:36:03.965824   12231 kubeadm.go:406] StartCluster complete in 3m54.895774412s
	I0216 09:36:03.965930   12231 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:36:04.005626   12231 logs.go:276] 0 containers: []
	W0216 09:36:04.005642   12231 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:36:04.005715   12231 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:36:04.023799   12231 logs.go:276] 0 containers: []
	W0216 09:36:04.023816   12231 logs.go:278] No container was found matching "etcd"
	I0216 09:36:04.023894   12231 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:36:04.041046   12231 logs.go:276] 0 containers: []
	W0216 09:36:04.041061   12231 logs.go:278] No container was found matching "coredns"
	I0216 09:36:04.041133   12231 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:36:04.060544   12231 logs.go:276] 0 containers: []
	W0216 09:36:04.060562   12231 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:36:04.060648   12231 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:36:04.080471   12231 logs.go:276] 0 containers: []
	W0216 09:36:04.080503   12231 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:36:04.080569   12231 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:36:04.098886   12231 logs.go:276] 0 containers: []
	W0216 09:36:04.098900   12231 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:36:04.098969   12231 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:36:04.118135   12231 logs.go:276] 0 containers: []
	W0216 09:36:04.118149   12231 logs.go:278] No container was found matching "kindnet"
	I0216 09:36:04.118156   12231 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:36:04.118164   12231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:36:04.215433   12231 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:36:04.215445   12231 logs.go:123] Gathering logs for Docker ...
	I0216 09:36:04.215453   12231 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:36:04.237974   12231 logs.go:123] Gathering logs for container status ...
	I0216 09:36:04.237990   12231 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:36:04.302364   12231 logs.go:123] Gathering logs for kubelet ...
	I0216 09:36:04.302379   12231 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:36:04.345208   12231 logs.go:123] Gathering logs for dmesg ...
	I0216 09:36:04.345225   12231 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0216 09:36:04.365613   12231 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 09:36:04.365659   12231 out.go:239] * 
	W0216 09:36:04.365700   12231 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 09:36:04.365715   12231 out.go:239] * 
	W0216 09:36:04.366556   12231 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
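	Note: the box above asks for a log bundle when filing an issue. A minimal way to capture one for this profile (a sketch, assuming the kubernetes-upgrade-089000 node container still exists):

	  out/minikube-darwin-amd64 logs -p kubernetes-upgrade-089000 --file=logs.txt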
	I0216 09:36:04.429012   12231 out.go:177] 
	W0216 09:36:04.471960   12231 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 09:36:04.472030   12231 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 09:36:04.472047   12231 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 09:36:04.534838   12231 out.go:177] 

** /stderr **
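The failure above is the classic kubelet-never-came-up pattern: kubeadm writes the static-pod manifests, then polls the kubelet's healthz endpoint on 10248 until it times out, and the preflight warnings already name the likely culprit (a kubelet/Docker cgroup-driver mismatch). A minimal triage sequence along the lines the output itself suggests, assuming shell access to the node via 'minikube ssh' (profile name taken from this run):

	minikube ssh -p kubernetes-upgrade-089000 sudo systemctl status kubelet        # is the service up at all?
	minikube ssh -p kubernetes-upgrade-089000 sudo journalctl -xeu kubelet         # why it exited
	minikube ssh -p kubernetes-upgrade-089000 "docker ps -a | grep kube | grep -v pause"   # control-plane containers
	minikube start -p kubernetes-upgrade-089000 --extra-config=kubelet.cgroup-driver=systemd   # fix suggested in the log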
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-089000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-089000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-089000: (1.56435203s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-089000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-089000 status --format={{.Host}}: exit status 7 (108.152566ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-089000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-089000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (4m35.631246818s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-089000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-089000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-089000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (505.200585ms)

-- stdout --
	* [kubernetes-upgrade-089000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=17936
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-089000
	    minikube start -p kubernetes-upgrade-089000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0890002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-089000 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
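The K8S_DOWNGRADE_UNSUPPORTED exit is the expected outcome of this step: minikube refuses in-place downgrades because a newer control plane may already have written state (etcd contents, API objects) that an older apiserver cannot read. Restated from suggestion 1 above as a copy-pasteable sequence, the safe path back to v1.16.0 is to discard the cluster state first:

	minikube delete -p kubernetes-upgrade-089000
	minikube start -p kubernetes-upgrade-089000 --kubernetes-version=v1.16.0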
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-089000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-089000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (34.463275795s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-16 09:41:17.001914 -0800 PST m=+3600.713017347
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-089000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-089000:

-- stdout --
	[
	    {
	        "Id": "6d7a1208fb7ee4291aa47ed886ddf591ce7568356d829581568caaf87080bc84",
	        "Created": "2024-02-16T17:31:51.318801642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 252997,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:36:07.927691035Z",
	            "FinishedAt": "2024-02-16T17:36:05.079964305Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/6d7a1208fb7ee4291aa47ed886ddf591ce7568356d829581568caaf87080bc84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6d7a1208fb7ee4291aa47ed886ddf591ce7568356d829581568caaf87080bc84/hostname",
	        "HostsPath": "/var/lib/docker/containers/6d7a1208fb7ee4291aa47ed886ddf591ce7568356d829581568caaf87080bc84/hosts",
	        "LogPath": "/var/lib/docker/containers/6d7a1208fb7ee4291aa47ed886ddf591ce7568356d829581568caaf87080bc84/6d7a1208fb7ee4291aa47ed886ddf591ce7568356d829581568caaf87080bc84-json.log",
	        "Name": "/kubernetes-upgrade-089000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-089000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-089000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c7a1a24979fe0d3403d96c03a4cecd7cd90446268e5db3f3aa7bfedbb849e665-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7a1a24979fe0d3403d96c03a4cecd7cd90446268e5db3f3aa7bfedbb849e665/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7a1a24979fe0d3403d96c03a4cecd7cd90446268e5db3f3aa7bfedbb849e665/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7a1a24979fe0d3403d96c03a4cecd7cd90446268e5db3f3aa7bfedbb849e665/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-089000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-089000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-089000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-089000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-089000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "add6cc7c8da32e52287a47bc57df6b59df2baa8bcdf357abb157e85366c73ee0",
	            "SandboxKey": "/var/run/docker/netns/add6cc7c8da3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52350"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52351"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52352"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-089000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6d7a1208fb7e",
	                        "kubernetes-upgrade-089000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "f635bb53bc94da9513e4e387592068f29b5c501cfd078d5fd3cbb5ff2721fb0f",
	                    "EndpointID": "a5ae508f5d5975f97fe782802b3b63ddb4fcce35ce0ca144e61dd97a2efee21f",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "kubernetes-upgrade-089000",
	                        "6d7a1208fb7e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
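The inspect dump above can be narrowed to single fields with Go-template format strings, which is exactly what minikube does later in this log. Two sketches against fields visible above:

	# container state only (same template the 09:40:43 log line runs)
	docker container inspect kubernetes-upgrade-089000 --format={{.State.Status}}
	# host port mapped to the node's 22/tcp, i.e. 52348 in this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubernetes-upgrade-089000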
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-089000 -n kubernetes-upgrade-089000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-089000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-089000 logs -n 25: (3.16571101s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-862000 sudo iptables                       | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo cat                            | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo cat                            | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo cat                            | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo docker                         | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo cat                            | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo cat                            | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo cat                            | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo cat                            | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST |                     |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo                                | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo find                           | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p calico-862000 sudo crio                           | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST | 16 Feb 24 09:41 PST |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p calico-862000                                     | calico-862000 | jenkins | v1.32.0 | 16 Feb 24 09:41 PST |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
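Each audit row is a minikube CLI invocation: the Command and Args columns concatenate back into the call that was run against the named profile. The journalctl row above, for example, corresponds to:

	out/minikube-darwin-amd64 ssh -p calico-862000 sudo journalctl -xeu kubelet --all --full --no-pager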
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 09:40:42
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 09:40:42.571075   14599 out.go:291] Setting OutFile to fd 1 ...
	I0216 09:40:42.571262   14599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:40:42.571269   14599 out.go:304] Setting ErrFile to fd 2...
	I0216 09:40:42.571273   14599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:40:42.571453   14599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 09:40:42.572821   14599 out.go:298] Setting JSON to false
	I0216 09:40:42.595906   14599 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4213,"bootTime":1708101029,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 09:40:42.596026   14599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 09:40:42.616997   14599 out.go:177] * [kubernetes-upgrade-089000] minikube v1.32.0 on Darwin 14.3.1
	I0216 09:40:42.658910   14599 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 09:40:42.658973   14599 notify.go:220] Checking for updates...
	I0216 09:40:42.716840   14599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 09:40:42.792789   14599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 09:40:42.834838   14599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 09:40:42.876867   14599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	I0216 09:40:42.897880   14599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 09:40:42.919237   14599 config.go:182] Loaded profile config "kubernetes-upgrade-089000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 09:40:42.919775   14599 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 09:40:42.980650   14599 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 09:40:42.980813   14599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:40:43.088664   14599 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:120 SystemTime:2024-02-16 17:40:43.078383163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:40:43.133092   14599 out.go:177] * Using the docker driver based on existing profile
	I0216 09:40:43.154046   14599 start.go:299] selected driver: docker
	I0216 09:40:43.154061   14599 start.go:903] validating driver "docker" against &{Name:kubernetes-upgrade-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:40:43.154135   14599 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 09:40:43.157580   14599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:40:43.267192   14599 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:120 SystemTime:2024-02-16 17:40:43.256550276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:40:43.267456   14599 cni.go:84] Creating CNI manager for ""
	I0216 09:40:43.267471   14599 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 09:40:43.267482   14599 start_flags.go:323] config:
	{Name:kubernetes-upgrade-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:40:43.325932   14599 out.go:177] * Starting control plane node kubernetes-upgrade-089000 in cluster kubernetes-upgrade-089000
	I0216 09:40:43.363172   14599 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 09:40:43.384099   14599 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 09:40:43.426094   14599 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 09:40:43.426138   14599 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 09:40:43.426155   14599 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0216 09:40:43.426170   14599 cache.go:56] Caching tarball of preloaded images
	I0216 09:40:43.426298   14599 preload.go:174] Found /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 09:40:43.426310   14599 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0216 09:40:43.426370   14599 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/config.json ...
	I0216 09:40:43.487595   14599 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 09:40:43.487621   14599 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 09:40:43.487647   14599 cache.go:194] Successfully downloaded all kic artifacts
	I0216 09:40:43.487690   14599 start.go:365] acquiring machines lock for kubernetes-upgrade-089000: {Name:mk9449c9299f15a4a0c897976f1618cf30fb8a7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 09:40:43.487784   14599 start.go:369] acquired machines lock for "kubernetes-upgrade-089000" in 73.066µs
	I0216 09:40:43.487808   14599 start.go:96] Skipping create...Using existing machine configuration
	I0216 09:40:43.487818   14599 fix.go:54] fixHost starting: 
	I0216 09:40:43.488133   14599 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-089000 --format={{.State.Status}}
	I0216 09:40:43.542521   14599 fix.go:102] recreateIfNeeded on kubernetes-upgrade-089000: state=Running err=<nil>
	W0216 09:40:43.542557   14599 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 09:40:43.564138   14599 out.go:177] * Updating the running docker "kubernetes-upgrade-089000" container ...
	I0216 09:40:43.605940   14599 machine.go:88] provisioning docker machine ...
	I0216 09:40:43.605985   14599 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-089000"
	I0216 09:40:43.606113   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:43.659129   14599 main.go:141] libmachine: Using SSH client type: native
	I0216 09:40:43.659481   14599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 52348 <nil> <nil>}
	I0216 09:40:43.659500   14599 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-089000 && echo "kubernetes-upgrade-089000" | sudo tee /etc/hostname
	I0216 09:40:43.820523   14599 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-089000
	
	I0216 09:40:43.820705   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:43.874002   14599 main.go:141] libmachine: Using SSH client type: native
	I0216 09:40:43.874299   14599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 52348 <nil> <nil>}
	I0216 09:40:43.874313   14599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-089000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-089000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-089000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 09:40:44.009709   14599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
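The SSH script above is an idempotent hosts-file edit: nothing is touched if some line already ends in the hostname; otherwise an existing 127.0.1.1 entry is rewritten in place, and only as a last resort is a new line appended. The same guard can be re-run by hand to confirm the result, e.g.:

	grep -x '.*\skubernetes-upgrade-089000' /etc/hosts    # prints the entry if the edit took effect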
	I0216 09:40:44.009730   14599 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17936-1021/.minikube CaCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17936-1021/.minikube}
	I0216 09:40:44.009752   14599 ubuntu.go:177] setting up certificates
	I0216 09:40:44.009764   14599 provision.go:83] configureAuth start
	I0216 09:40:44.009842   14599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-089000
	I0216 09:40:44.064433   14599 provision.go:138] copyHostCerts
	I0216 09:40:44.064543   14599 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem, removing ...
	I0216 09:40:44.064552   14599 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
	I0216 09:40:44.064681   14599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem (1123 bytes)
	I0216 09:40:44.064931   14599 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem, removing ...
	I0216 09:40:44.064938   14599 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
	I0216 09:40:44.065015   14599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem (1675 bytes)
	I0216 09:40:44.065218   14599 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem, removing ...
	I0216 09:40:44.065224   14599 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
	I0216 09:40:44.065300   14599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem (1082 bytes)
	I0216 09:40:44.065446   14599 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-089000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-089000]
	I0216 09:40:44.359727   14599 provision.go:172] copyRemoteCerts
	I0216 09:40:44.359806   14599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 09:40:44.359860   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:44.416162   14599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52348 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:40:44.519336   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 09:40:44.560840   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0216 09:40:44.602540   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 09:40:44.644571   14599 provision.go:86] duration metric: configureAuth took 634.799365ms
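configureAuth has now generated a server certificate whose SANs cover the container IP and localhost, and copied the CA, cert, and key to /etc/docker on the node; the dockerd ExecStart written below points --tlscacert/--tlscert/--tlskey at exactly these paths. A quick sanity check, assuming openssl is present inside the node image:

	minikube ssh -p kubernetes-upgrade-089000 sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates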
	I0216 09:40:44.644585   14599 ubuntu.go:193] setting minikube options for container-runtime
	I0216 09:40:44.644738   14599 config.go:182] Loaded profile config "kubernetes-upgrade-089000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 09:40:44.644802   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:44.698021   14599 main.go:141] libmachine: Using SSH client type: native
	I0216 09:40:44.698314   14599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 52348 <nil> <nil>}
	I0216 09:40:44.698323   14599 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 09:40:44.837453   14599 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 09:40:44.837473   14599 ubuntu.go:71] root file system type: overlay
	I0216 09:40:44.837558   14599 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 09:40:44.837644   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:44.890945   14599 main.go:141] libmachine: Using SSH client type: native
	I0216 09:40:44.891253   14599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 52348 <nil> <nil>}
	I0216 09:40:44.891302   14599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 09:40:45.051240   14599 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 09:40:45.051342   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:45.105063   14599 main.go:141] libmachine: Using SSH client type: native
	I0216 09:40:45.105373   14599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 52348 <nil> <nil>}
	I0216 09:40:45.105387   14599 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 09:40:45.251119   14599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 09:40:45.251144   14599 machine.go:91] provisioned docker machine in 1.645203048s
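The docker.service update above follows a write-then-swap pattern: the candidate unit is staged as docker.service.new, and the move, daemon-reload, enable and restart only run when diff reports a difference (diff exits non-zero on change), so an unchanged unit costs nothing. A generic sketch of that pattern, with illustrative paths rather than minikube's exact code:

    # Idempotent systemd unit update: restart only when content changed.
    unit=/lib/systemd/system/docker.service   # illustrative target
    new="$unit.new"                           # staged copy written beforehand
    if ! sudo diff -u "$unit" "$new"; then
      sudo mv "$new" "$unit"
      sudo systemctl daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi
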
	I0216 09:40:45.251155   14599 start.go:300] post-start starting for "kubernetes-upgrade-089000" (driver="docker")
	I0216 09:40:45.251168   14599 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 09:40:45.251246   14599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 09:40:45.251313   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:45.305729   14599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52348 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:40:45.406270   14599 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 09:40:45.410734   14599 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 09:40:45.410760   14599 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 09:40:45.410770   14599 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 09:40:45.410776   14599 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 09:40:45.410784   14599 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/addons for local assets ...
	I0216 09:40:45.410885   14599 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/files for local assets ...
	I0216 09:40:45.411109   14599 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> 21512.pem in /etc/ssl/certs
	I0216 09:40:45.411329   14599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 09:40:45.426527   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:40:45.468082   14599 start.go:303] post-start completed in 216.901255ms
	I0216 09:40:45.468191   14599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 09:40:45.468309   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:45.523421   14599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52348 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:40:45.617774   14599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 09:40:45.623557   14599 fix.go:56] fixHost completed within 2.135755108s
	I0216 09:40:45.623570   14599 start.go:83] releasing machines lock for "kubernetes-upgrade-089000", held for 2.135798236s
	I0216 09:40:45.623669   14599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-089000
	I0216 09:40:45.675781   14599 ssh_runner.go:195] Run: cat /version.json
	I0216 09:40:45.675789   14599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 09:40:45.675859   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:45.675873   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:45.733572   14599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52348 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:40:45.734018   14599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52348 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:40:45.934039   14599 ssh_runner.go:195] Run: systemctl --version
	I0216 09:40:45.939043   14599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0216 09:40:45.944254   14599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0216 09:40:45.944309   14599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 09:40:45.959632   14599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 09:40:45.974606   14599 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
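Had a bridge or podman CNI config been present, the find/sed pair above would have rewritten its IPv4 "subnet" to the pod CIDR 10.244.0.0/16 and dropped IPv6 entries. A simplified sketch of that rewrite on a hypothetical file (no such file existed in this run, hence "nothing to configure"; the sed here is a reduced form of the commands logged above):

    # Rewrite a CNI config's subnet to minikube's pod CIDR (simplified sed).
    cat <<'EOF' > /tmp/10-bridge.conflist
    { "type": "bridge", "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" } }
    EOF
    sed -i -r 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' /tmp/10-bridge.conflist
    grep -o '"subnet": "[^"]*"' /tmp/10-bridge.conflist   # "subnet": "10.244.0.0/16"
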
	I0216 09:40:45.974621   14599 start.go:475] detecting cgroup driver to use...
	I0216 09:40:45.974637   14599 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:40:45.974749   14599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:40:46.004345   14599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0216 09:40:46.021704   14599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 09:40:46.038144   14599 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 09:40:46.038275   14599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 09:40:46.055389   14599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:40:46.071705   14599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 09:40:46.088755   14599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:40:46.107399   14599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 09:40:46.123967   14599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 09:40:46.140510   14599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 09:40:46.155790   14599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 09:40:46.170867   14599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:40:46.249398   14599 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 09:40:56.414527   14599 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.165184211s)
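Taken together, the sed edits at 09:40:46 pin containerd to the cgroupfs driver, the runc v2 shim and the expected sandbox image. A spot-check of what /etc/containerd/config.toml should contain afterwards (the expected values are reconstructed from the commands, not captured from this run):

    # Spot-check the settings the sed edits above are meant to produce.
    grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' \
        /etc/containerd/config.toml
    # expected:
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
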
	I0216 09:40:56.414583   14599 start.go:475] detecting cgroup driver to use...
	I0216 09:40:56.414595   14599 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:40:56.414669   14599 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 09:40:56.433640   14599 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 09:40:56.433710   14599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 09:40:56.452548   14599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:40:56.484460   14599 ssh_runner.go:195] Run: which cri-dockerd
	I0216 09:40:56.489509   14599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 09:40:56.507395   14599 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 09:40:56.559949   14599 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 09:40:56.653926   14599 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 09:40:56.724233   14599 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 09:40:56.724334   14599 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 09:40:56.778251   14599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:40:56.885084   14599 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 09:40:57.274016   14599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0216 09:40:57.292363   14599 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0216 09:40:57.313442   14599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 09:40:57.330897   14599 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0216 09:40:57.400369   14599 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0216 09:40:57.464962   14599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:40:57.528592   14599 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0216 09:40:57.562933   14599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 09:40:57.580348   14599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:40:57.663205   14599 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0216 09:40:57.758523   14599 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0216 09:40:57.758616   14599 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0216 09:40:57.763349   14599 start.go:543] Will wait 60s for crictl version
	I0216 09:40:57.763412   14599 ssh_runner.go:195] Run: which crictl
	I0216 09:40:57.767781   14599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0216 09:40:57.822204   14599 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
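Note that /etc/crictl.yaml is written twice in this run: first pointing at containerd's socket (09:40:45), then at cri-dockerd's once Docker is confirmed as the runtime (09:40:56). The final state is a one-line file; a sketch of writing and verifying it by hand:

    # Point crictl at cri-dockerd, as the second rewrite above does.
    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo crictl version   # RuntimeName: docker, RuntimeVersion: 25.0.3 as logged
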
	I0216 09:40:57.822288   14599 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 09:40:57.845434   14599 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 09:40:57.891689   14599 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 25.0.3 ...
	I0216 09:40:57.891775   14599 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-089000 dig +short host.docker.internal
	I0216 09:40:57.996142   14599 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0216 09:40:57.996265   14599 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
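The host address used for host.minikube.internal is discovered by resolving host.docker.internal from inside the node container, a name Docker Desktop's embedded DNS answers; the result (192.168.65.254 here) is then checked against the guest's /etc/hosts. The same lookup by hand:

    # Resolve the host's address from inside the node container (Docker Desktop).
    docker exec -t kubernetes-upgrade-089000 dig +short host.docker.internal
    # -> 192.168.65.254 in this run
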
	I0216 09:40:58.001069   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:58.053963   14599 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 09:40:58.054051   14599 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:40:58.074207   14599 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 09:40:58.074228   14599 docker.go:615] Images already preloaded, skipping extraction
	I0216 09:40:58.074301   14599 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:40:58.092812   14599 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 09:40:58.092828   14599 cache_images.go:84] Images are preloaded, skipping loading
	I0216 09:40:58.092921   14599 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 09:40:58.141977   14599 cni.go:84] Creating CNI manager for ""
	I0216 09:40:58.141998   14599 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 09:40:58.142013   14599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 09:40:58.142031   14599 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-089000 NodeName:kubernetes-upgrade-089000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0216 09:40:58.142155   14599 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-089000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
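The generated kubeadm.yaml above stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. As a hedged aside (not something this run performed), recent kubeadm releases can sanity-check such a file offline before the phased init further below; the subcommand exists in kubeadm v1.26 and later:

    # Offline validation of the generated config (kubeadm v1.26+).
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml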
	
	I0216 09:40:58.142229   14599 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-089000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 09:40:58.142293   14599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0216 09:40:58.157609   14599 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 09:40:58.157702   14599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 09:40:58.172624   14599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I0216 09:40:58.201819   14599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0216 09:40:58.231097   14599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
	I0216 09:40:58.260622   14599 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0216 09:40:58.265098   14599 certs.go:56] Setting up /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000 for IP: 192.168.67.2
	I0216 09:40:58.265117   14599 certs.go:190] acquiring lock for shared ca certs: {Name:mk8795f926ccc5dd497b243df5a2c158b5c5b28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:40:58.265294   14599 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key
	I0216 09:40:58.265380   14599 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key
	I0216 09:40:58.265468   14599 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.key
	I0216 09:40:58.265602   14599 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.key.c7fa3a9e
	I0216 09:40:58.265700   14599 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/proxy-client.key
	I0216 09:40:58.265933   14599 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem (1338 bytes)
	W0216 09:40:58.266008   14599 certs.go:433] ignoring /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151_empty.pem, impossibly tiny 0 bytes
	I0216 09:40:58.266018   14599 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 09:40:58.266058   14599 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem (1082 bytes)
	I0216 09:40:58.266094   14599 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem (1123 bytes)
	I0216 09:40:58.266142   14599 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem (1675 bytes)
	I0216 09:40:58.266217   14599 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:40:58.266806   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 09:40:58.307757   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0216 09:40:58.348406   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 09:40:58.390236   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0216 09:40:58.434085   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 09:40:58.475602   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 09:40:58.515724   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 09:40:58.556058   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 09:40:58.596628   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /usr/share/ca-certificates/21512.pem (1708 bytes)
	I0216 09:40:58.638486   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 09:40:58.679376   14599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem --> /usr/share/ca-certificates/2151.pem (1338 bytes)
	I0216 09:40:58.720140   14599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 09:40:58.749634   14599 ssh_runner.go:195] Run: openssl version
	I0216 09:40:58.757310   14599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21512.pem && ln -fs /usr/share/ca-certificates/21512.pem /etc/ssl/certs/21512.pem"
	I0216 09:40:58.774276   14599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21512.pem
	I0216 09:40:58.778828   14599 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:51 /usr/share/ca-certificates/21512.pem
	I0216 09:40:58.778879   14599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21512.pem
	I0216 09:40:58.785858   14599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21512.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 09:40:58.801280   14599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 09:40:58.817415   14599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:40:58.821937   14599 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:40:58.821996   14599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:40:58.829200   14599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 09:40:58.844597   14599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2151.pem && ln -fs /usr/share/ca-certificates/2151.pem /etc/ssl/certs/2151.pem"
	I0216 09:40:58.860306   14599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2151.pem
	I0216 09:40:58.864537   14599 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:51 /usr/share/ca-certificates/2151.pem
	I0216 09:40:58.864588   14599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2151.pem
	I0216 09:40:58.871302   14599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2151.pem /etc/ssl/certs/51391683.0"
	I0216 09:40:58.886463   14599 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 09:40:58.890939   14599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 09:40:58.898393   14599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 09:40:58.905336   14599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 09:40:58.911867   14599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 09:40:58.918817   14599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 09:40:58.925672   14599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
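Two openssl idioms carry this whole section: `-hash -noout` prints the subject-name hash that names the /etc/ssl/certs/<hash>.0 symlinks (3ec20f2e, b5213941 and 51391683 above), and `-checkend 86400` exits 0 only if the certificate stays valid for at least another 24 hours. Both in one sketch, using paths from this run:

    # Hash-named CA symlink plus a 24h expiry check, as performed above.
    h="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"   # b5213941.0 here
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for >= 24h" || echo "expires within 24h"
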
	I0216 09:40:58.932358   14599 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:40:58.932478   14599 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 09:40:58.949044   14599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 09:40:58.964818   14599 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 09:40:58.964839   14599 kubeadm.go:636] restartCluster start
	I0216 09:40:58.964903   14599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 09:40:58.980830   14599 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:40:58.980920   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:40:59.034726   14599 kubeconfig.go:92] found "kubernetes-upgrade-089000" server: "https://127.0.0.1:52352"
	I0216 09:40:59.035490   14599 kapi.go:59] client config for kubernetes-upgrade-089000: &rest.Config{Host:"https://127.0.0.1:52352", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.key", CAFile:"/Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f81c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0216 09:40:59.036154   14599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 09:40:59.051658   14599 api_server.go:166] Checking apiserver status ...
	I0216 09:40:59.051754   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:40:59.068412   14599 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
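The probe being retried here is `pgrep -xnf kube-apiserver.*minikube.*`: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n keeps only the newest match; exit status 1 simply means no apiserver process exists yet. One-liner form:

    # The liveness probe used above; exit 1 = no such process yet.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver not up yet"
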
	I0216 09:40:59.552123   14599 api_server.go:166] Checking apiserver status ...
	I0216 09:40:59.552199   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:40:59.569747   14599 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:41:00.052262   14599 api_server.go:166] Checking apiserver status ...
	I0216 09:41:00.052384   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:41:00.069050   14599 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:41:00.552005   14599 api_server.go:166] Checking apiserver status ...
	I0216 09:41:00.552102   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:41:00.571192   14599 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:41:01.052139   14599 api_server.go:166] Checking apiserver status ...
	I0216 09:41:01.052212   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:41:01.072140   14599 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:41:01.551807   14599 api_server.go:166] Checking apiserver status ...
	I0216 09:41:01.551935   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:41:01.571372   14599 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:41:02.052392   14599 api_server.go:166] Checking apiserver status ...
	I0216 09:41:02.052466   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:41:02.071633   14599 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:41:02.552267   14599 api_server.go:166] Checking apiserver status ...
	I0216 09:41:02.552405   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:41:02.571693   14599 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:41:03.051905   14599 api_server.go:166] Checking apiserver status ...
	I0216 09:41:03.052016   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:41:03.082568   14599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/14168/cgroup
	W0216 09:41:03.170476   14599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/14168/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:41:03.170567   14599 ssh_runner.go:195] Run: ls
	I0216 09:41:03.177404   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:03.179485   14599 api_server.go:269] stopped: https://127.0.0.1:52352/healthz: Get "https://127.0.0.1:52352/healthz": EOF
	I0216 09:41:03.179561   14599 retry.go:31] will retry after 292.231494ms: state is "Stopped"
	I0216 09:41:03.471932   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:05.475091   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 09:41:05.475124   14599 retry.go:31] will retry after 260.819037ms: https://127.0.0.1:52352/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 09:41:05.736970   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:05.743198   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:41:05.743225   14599 retry.go:31] will retry after 401.40661ms: https://127.0.0.1:52352/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:41:06.144822   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:06.150307   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:41:06.150328   14599 retry.go:31] will retry after 470.981728ms: https://127.0.0.1:52352/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:41:06.621389   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:06.628219   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:41:06.628240   14599 retry.go:31] will retry after 625.244337ms: https://127.0.0.1:52352/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:41:07.254245   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:07.261023   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 200:
	ok
	I0216 09:41:07.276176   14599 system_pods.go:86] 5 kube-system pods found
	I0216 09:41:07.276198   14599 system_pods.go:89] "etcd-kubernetes-upgrade-089000" [5a0d4de2-e3a6-4d47-8ce2-e4f0e189a682] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 09:41:07.276204   14599 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-089000" [2c06b977-77b8-4d1e-b022-d082012c4ddd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 09:41:07.276221   14599 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-089000" [8cee4102-833e-4480-8909-a4712909727d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 09:41:07.276228   14599 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-089000" [76db8f8b-af4c-4222-9542-7c2f9c4fb4ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 09:41:07.276235   14599 system_pods.go:89] "storage-provisioner" [b5aff38c-a359-42dc-8e53-e3a472aaca8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0216 09:41:07.276242   14599 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-proxy
	I0216 09:41:07.276250   14599 kubeadm.go:1135] stopping kube-system containers ...
	I0216 09:41:07.276318   14599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 09:41:07.298026   14599 docker.go:483] Stopping containers: [7338c03f00e8 f9f5c860e04d 4d4c459414ac 9a9002bd67d8 31d4f85e5bea 2c050ffceeef 9b742f05c73a 802d6649ba38 e429e6f5a60f 322df95b6ebb bf682e6f4d41 ff30ba879a04 76200f435b46 d358de985b7d f5adf71031c1 e3e5547a260a 03184a7a0e8e 3ee85d7d1aa6]
	I0216 09:41:07.298103   14599 ssh_runner.go:195] Run: docker stop 7338c03f00e8 f9f5c860e04d 4d4c459414ac 9a9002bd67d8 31d4f85e5bea 2c050ffceeef 9b742f05c73a 802d6649ba38 e429e6f5a60f 322df95b6ebb bf682e6f4d41 ff30ba879a04 76200f435b46 d358de985b7d f5adf71031c1 e3e5547a260a 03184a7a0e8e 3ee85d7d1aa6
	I0216 09:41:08.409924   14599 ssh_runner.go:235] Completed: docker stop 7338c03f00e8 f9f5c860e04d 4d4c459414ac 9a9002bd67d8 31d4f85e5bea 2c050ffceeef 9b742f05c73a 802d6649ba38 e429e6f5a60f 322df95b6ebb bf682e6f4d41 ff30ba879a04 76200f435b46 d358de985b7d f5adf71031c1 e3e5547a260a 03184a7a0e8e 3ee85d7d1aa6: (1.111779303s)
	I0216 09:41:08.410070   14599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0216 09:41:08.475596   14599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:41:08.552215   14599 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5651 Feb 16 17:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 16 17:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Feb 16 17:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 16 17:40 /etc/kubernetes/scheduler.conf
	
	I0216 09:41:08.552336   14599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0216 09:41:08.573141   14599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0216 09:41:08.595263   14599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0216 09:41:08.615737   14599 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:41:08.615829   14599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0216 09:41:08.662595   14599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0216 09:41:08.680252   14599 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:41:08.680312   14599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0216 09:41:08.697629   14599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 09:41:08.714725   14599 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 09:41:08.714781   14599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:41:08.776920   14599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:41:09.569590   14599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:41:09.733736   14599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:41:09.805535   14599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
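Rather than a full `kubeadm init`, the restart path replays individual phases against the regenerated config, refreshing certs, kubeconfigs, the kubelet bootstrap and the static pod manifests while keeping existing cluster state. Condensed into a loop equivalent to the five commands above (word-splitting of $phase is intentional):

    # The phased re-init performed above (v1.29.0-rc.2 binaries).
    KD=/var/lib/minikube/binaries/v1.29.0-rc.2
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$KD:$PATH" kubeadm init phase $phase --config "$CFG"
    done
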
	I0216 09:41:09.896175   14599 api_server.go:52] waiting for apiserver process to appear ...
	I0216 09:41:09.896256   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:41:10.396847   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:41:10.896699   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:41:10.962417   14599 api_server.go:72] duration metric: took 1.066254691s to wait for apiserver process to appear ...
	I0216 09:41:10.962436   14599 api_server.go:88] waiting for apiserver healthz status ...
	I0216 09:41:10.962453   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:13.889522   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 09:41:13.889540   14599 api_server.go:103] status: https://127.0.0.1:52352/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 09:41:13.889552   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:13.961445   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 09:41:13.961467   14599 api_server.go:103] status: https://127.0.0.1:52352/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:41:13.962698   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:13.970106   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 09:41:13.970149   14599 api_server.go:103] status: https://127.0.0.1:52352/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:41:14.463092   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:14.467866   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 09:41:14.467878   14599 api_server.go:103] status: https://127.0.0.1:52352/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:41:14.962762   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:14.970561   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 09:41:14.970582   14599 api_server.go:103] status: https://127.0.0.1:52352/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:41:15.463873   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:15.469575   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 200:
	ok
	I0216 09:41:15.477312   14599 api_server.go:141] control plane version: v1.29.0-rc.2
	I0216 09:41:15.477344   14599 api_server.go:131] duration metric: took 4.51494373s to wait for apiserver health ...
	I0216 09:41:15.477352   14599 cni.go:84] Creating CNI manager for ""
	I0216 09:41:15.477363   14599 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 09:41:15.498867   14599 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 09:41:15.520763   14599 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 09:41:15.541946   14599 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
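
	The 457-byte conflist itself is not printed in the log; the following is a representative bridge configuration of the kind minikube writes to that path (field values are illustrative, not the exact file from this run):

	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
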
	I0216 09:41:15.575703   14599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 09:41:15.582444   14599 system_pods.go:59] 5 kube-system pods found
	I0216 09:41:15.582462   14599 system_pods.go:61] "etcd-kubernetes-upgrade-089000" [5a0d4de2-e3a6-4d47-8ce2-e4f0e189a682] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 09:41:15.582470   14599 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-089000" [2c06b977-77b8-4d1e-b022-d082012c4ddd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 09:41:15.582520   14599 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-089000" [8cee4102-833e-4480-8909-a4712909727d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 09:41:15.582539   14599 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-089000" [76db8f8b-af4c-4222-9542-7c2f9c4fb4ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 09:41:15.582544   14599 system_pods.go:61] "storage-provisioner" [b5aff38c-a359-42dc-8e53-e3a472aaca8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0216 09:41:15.582550   14599 system_pods.go:74] duration metric: took 6.815247ms to wait for pod list to return data ...
	I0216 09:41:15.582558   14599 node_conditions.go:102] verifying NodePressure condition ...
	I0216 09:41:15.586272   14599 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0216 09:41:15.586293   14599 node_conditions.go:123] node cpu capacity is 12
	I0216 09:41:15.586303   14599 node_conditions.go:105] duration metric: took 3.74039ms to run NodePressure ...
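
	storage-provisioner stays Pending because the node still carries the node.kubernetes.io/not-ready:NoSchedule taint (also visible under Taints in the node description further down); it becomes schedulable once the kubelet reports Ready and the taint is removed. The taint can be read directly:

	kubectl get node kubernetes-upgrade-089000 -o jsonpath='{.spec.taints}{"\n"}'
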
	I0216 09:41:15.586315   14599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:41:15.856394   14599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0216 09:41:15.866125   14599 ops.go:34] apiserver oom_adj: -16
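
	An oom_adj of -16 on the legacy -17..15 scale tells the kernel's OOM killer to sacrifice nearly anything else before the apiserver. The check from the log, runnable on the node (oom_score_adj is the modern equivalent of oom_adj):

	cat /proc/"$(pgrep -n kube-apiserver)"/oom_adj
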
	I0216 09:41:15.866140   14599 kubeadm.go:640] restartCluster took 16.901451233s
	I0216 09:41:15.866147   14599 kubeadm.go:406] StartCluster complete in 16.933955112s
	I0216 09:41:15.866160   14599 settings.go:142] acquiring lock: {Name:mk797212e07e7fce370dcd397d90efd277229019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:41:15.866243   14599 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 09:41:15.866907   14599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/kubeconfig: {Name:mkc64745a91dd32fe2631c66fb95eca6401b716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:41:15.867523   14599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0216 09:41:15.867604   14599 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0216 09:41:15.867681   14599 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-089000"
	I0216 09:41:15.867700   14599 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-089000"
	I0216 09:41:15.867732   14599 config.go:182] Loaded profile config "kubernetes-upgrade-089000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	W0216 09:41:15.867732   14599 addons.go:243] addon storage-provisioner should already be in state true
	I0216 09:41:15.867738   14599 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-089000"
	I0216 09:41:15.867795   14599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-089000"
	I0216 09:41:15.867801   14599 host.go:66] Checking if "kubernetes-upgrade-089000" exists ...
	I0216 09:41:15.868105   14599 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-089000 --format={{.State.Status}}
	I0216 09:41:15.868228   14599 kapi.go:59] client config for kubernetes-upgrade-089000: &rest.Config{Host:"https://127.0.0.1:52352", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.key", CAFile:"/Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f81c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0216 09:41:15.868827   14599 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-089000 --format={{.State.Status}}
	I0216 09:41:15.875318   14599 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-089000" context rescaled to 1 replicas
	I0216 09:41:15.875370   14599 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 09:41:15.896509   14599 out.go:177] * Verifying Kubernetes components...
	I0216 09:41:15.939581   14599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 09:41:15.970640   14599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 09:41:15.950880   14599 kapi.go:59] client config for kubernetes-upgrade-089000: &rest.Config{Host:"https://127.0.0.1:52352", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubernetes-upgrade-089000/client.key", CAFile:"/Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f81c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0216 09:41:15.968174   14599 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0216 09:41:15.969175   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:41:16.007571   14599 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 09:41:16.007588   14599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0216 09:41:16.007688   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:41:16.008598   14599 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-089000"
	W0216 09:41:16.008673   14599 addons.go:243] addon default-storageclass should already be in state true
	I0216 09:41:16.008734   14599 host.go:66] Checking if "kubernetes-upgrade-089000" exists ...
	I0216 09:41:16.010974   14599 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-089000 --format={{.State.Status}}
	I0216 09:41:16.086480   14599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52348 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:41:16.086512   14599 api_server.go:52] waiting for apiserver process to appear ...
	I0216 09:41:16.086657   14599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:41:16.091647   14599 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0216 09:41:16.091664   14599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0216 09:41:16.091755   14599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-089000
	I0216 09:41:16.112400   14599 api_server.go:72] duration metric: took 237.001414ms to wait for apiserver process to appear ...
	I0216 09:41:16.112432   14599 api_server.go:88] waiting for apiserver healthz status ...
	I0216 09:41:16.112452   14599 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52352/healthz ...
	I0216 09:41:16.120658   14599 api_server.go:279] https://127.0.0.1:52352/healthz returned 200:
	ok
	I0216 09:41:16.122665   14599 api_server.go:141] control plane version: v1.29.0-rc.2
	I0216 09:41:16.122678   14599 api_server.go:131] duration metric: took 10.240831ms to wait for apiserver health ...
	I0216 09:41:16.122684   14599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 09:41:16.128742   14599 system_pods.go:59] 5 kube-system pods found
	I0216 09:41:16.128766   14599 system_pods.go:61] "etcd-kubernetes-upgrade-089000" [5a0d4de2-e3a6-4d47-8ce2-e4f0e189a682] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 09:41:16.128776   14599 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-089000" [2c06b977-77b8-4d1e-b022-d082012c4ddd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 09:41:16.128795   14599 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-089000" [8cee4102-833e-4480-8909-a4712909727d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 09:41:16.128814   14599 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-089000" [76db8f8b-af4c-4222-9542-7c2f9c4fb4ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 09:41:16.128825   14599 system_pods.go:61] "storage-provisioner" [b5aff38c-a359-42dc-8e53-e3a472aaca8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0216 09:41:16.128833   14599 system_pods.go:74] duration metric: took 6.142612ms to wait for pod list to return data ...
	I0216 09:41:16.128843   14599 kubeadm.go:581] duration metric: took 253.449861ms to wait for : map[apiserver:true system_pods:true] ...
	I0216 09:41:16.128856   14599 node_conditions.go:102] verifying NodePressure condition ...
	I0216 09:41:16.132930   14599 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0216 09:41:16.132950   14599 node_conditions.go:123] node cpu capacity is 12
	I0216 09:41:16.132969   14599 node_conditions.go:105] duration metric: took 4.107494ms to run NodePressure ...
	I0216 09:41:16.132979   14599 start.go:228] waiting for startup goroutines ...
	I0216 09:41:16.161496   14599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52348 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/kubernetes-upgrade-089000/id_rsa Username:docker}
	I0216 09:41:16.212779   14599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 09:41:16.282818   14599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
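
	The two applies above, written out as they would be run by hand on the node (kubectl accepts multiple -f flags, so both manifests can go in one invocation):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml \
	  -f /etc/kubernetes/addons/storageclass.yaml
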
	I0216 09:41:16.809636   14599 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0216 09:41:16.851245   14599 addons.go:505] enable addons completed in 983.672646ms: enabled=[storage-provisioner default-storageclass]
	I0216 09:41:16.851280   14599 start.go:233] waiting for cluster config update ...
	I0216 09:41:16.851304   14599 start.go:242] writing updated cluster config ...
	I0216 09:41:16.851711   14599 ssh_runner.go:195] Run: rm -f paused
	I0216 09:41:16.897927   14599 start.go:601] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0216 09:41:16.919267   14599 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-089000" cluster and "default" namespace by default
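
	The closing check compares the kubectl client's minor version (1.29.1) with the cluster's (1.29.0-rc.2); here the skew is 0, so no warning is printed. The same comparison by hand (assumes jq is installed):

	kubectl version --output=json | jq -r '[.clientVersion.gitVersion, .serverVersion.gitVersion] | @tsv'
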
	
	
	==> Docker <==
	Feb 16 17:40:57 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:40:57Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 16 17:40:57 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:40:57Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 16 17:40:57 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:40:57Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 16 17:40:57 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:40:57Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 16 17:40:57 kubernetes-upgrade-089000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 16 17:41:02 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:41:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9b742f05c73ad4e86260fc88f838ce7ef128b6b000b94af6ff7f5e6ef3967d5e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:41:02 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:41:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2c050ffceeefffd25ddbecfa5748f2c622a95e3dc6ce5e05f829cc465b1ebb24/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:41:02 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:41:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/31d4f85e5bead2f2afa861905387aa7041b62a6b352e4df47199aa05d0084745/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:41:02 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:41:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/802d6649ba38b84fc1c8b31cbbe951bdf311e23564563ca8000942378ca0ca69/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:41:07 kubernetes-upgrade-089000 dockerd[13288]: time="2024-02-16T17:41:07.370655964Z" level=info msg="ignoring event" container=9a9002bd67d809adc7c67f9ba142e99a2493b532ff3653d70e169998a9f40204 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:41:07 kubernetes-upgrade-089000 dockerd[13288]: time="2024-02-16T17:41:07.370839819Z" level=info msg="ignoring event" container=31d4f85e5bead2f2afa861905387aa7041b62a6b352e4df47199aa05d0084745 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:41:07 kubernetes-upgrade-089000 dockerd[13288]: time="2024-02-16T17:41:07.371114976Z" level=info msg="ignoring event" container=2c050ffceeefffd25ddbecfa5748f2c622a95e3dc6ce5e05f829cc465b1ebb24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:41:07 kubernetes-upgrade-089000 dockerd[13288]: time="2024-02-16T17:41:07.446437254Z" level=info msg="ignoring event" container=9b742f05c73ad4e86260fc88f838ce7ef128b6b000b94af6ff7f5e6ef3967d5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:41:07 kubernetes-upgrade-089000 dockerd[13288]: time="2024-02-16T17:41:07.446558558Z" level=info msg="ignoring event" container=4d4c459414ac2b4de49212ccacb505a9680c6064f33cedc9c8b3bfc07bb22513 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:41:07 kubernetes-upgrade-089000 dockerd[13288]: time="2024-02-16T17:41:07.448668367Z" level=info msg="ignoring event" container=802d6649ba38b84fc1c8b31cbbe951bdf311e23564563ca8000942378ca0ca69 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:41:07 kubernetes-upgrade-089000 dockerd[13288]: time="2024-02-16T17:41:07.448753590Z" level=info msg="ignoring event" container=7338c03f00e8af3652524a302f471d5d5f7a0ae0d8006cbfb959b14fe644e4b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:41:08 kubernetes-upgrade-089000 dockerd[13288]: time="2024-02-16T17:41:08.359555338Z" level=info msg="ignoring event" container=f9f5c860e04d0729831c097da123cafac99cd5a7f48181296f26f4576687b74b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:41:08 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:41:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd2949a57788903c9eb1d2cb9a0a9f82f17c53773f13533d3f97078493c2e456/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:41:08 kubernetes-upgrade-089000 cri-dockerd[13518]: W0216 17:41:08.562647   13518 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 16 17:41:08 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:41:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5dc01af2fba93130acf2ad355654dfd0a9a6cc4367ad9edb453adcc62db652d7/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:41:08 kubernetes-upgrade-089000 cri-dockerd[13518]: W0216 17:41:08.564221   13518 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 16 17:41:08 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:41:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ecb09a7415f190c681f0ea8c08b55e489174009536086a2236bce30f6a9ebfb2/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:41:08 kubernetes-upgrade-089000 cri-dockerd[13518]: W0216 17:41:08.599703   13518 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 16 17:41:08 kubernetes-upgrade-089000 cri-dockerd[13518]: time="2024-02-16T17:41:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82eb15bb606b24de567db3c73c76e23341784389dee7bbbdfc7b8bd4e4dbbd7d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 16 17:41:08 kubernetes-upgrade-089000 cri-dockerd[13518]: W0216 17:41:08.651686   13518 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55bf5386ffee2       4270645ed6b7a       8 seconds ago       Running             kube-scheduler            2                   ecb09a7415f19       kube-scheduler-kubernetes-upgrade-089000
	a41699fa9dbd7       d4e01cdf63970       8 seconds ago       Running             kube-controller-manager   2                   5dc01af2fba93       kube-controller-manager-kubernetes-upgrade-089000
	1ae782d32291f       bbb47a0f83324       8 seconds ago       Running             kube-apiserver            2                   82eb15bb606b2       kube-apiserver-kubernetes-upgrade-089000
	3ed286022f8d1       a0eed15eed449       8 seconds ago       Running             etcd                      2                   bd2949a577889       etcd-kubernetes-upgrade-089000
	7338c03f00e8a       a0eed15eed449       16 seconds ago      Exited              etcd                      1                   9b742f05c73ad       etcd-kubernetes-upgrade-089000
	f9f5c860e04d0       bbb47a0f83324       16 seconds ago      Exited              kube-apiserver            1                   802d6649ba38b       kube-apiserver-kubernetes-upgrade-089000
	4d4c459414ac2       d4e01cdf63970       16 seconds ago      Exited              kube-controller-manager   1                   31d4f85e5bead       kube-controller-manager-kubernetes-upgrade-089000
	9a9002bd67d80       4270645ed6b7a       16 seconds ago      Exited              kube-scheduler            1                   2c050ffceeeff       kube-scheduler-kubernetes-upgrade-089000
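
	The table shows CRI-level state: each control-plane container has an exited attempt 1, killed during the 17:41:07 restart, and a running attempt 2. The equivalent query on the node, pointed at the cri-dockerd socket named in the node annotations below:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
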
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-089000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-089000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdce3bf7146356e37c4eabb07ae105993e4520f9
	                    minikube.k8s.io/name=kubernetes-upgrade-089000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_16T09_40_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Feb 2024 17:40:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-089000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Feb 2024 17:41:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Feb 2024 17:41:14 +0000   Fri, 16 Feb 2024 17:40:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Feb 2024 17:41:14 +0000   Fri, 16 Feb 2024 17:40:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Feb 2024 17:41:14 +0000   Fri, 16 Feb 2024 17:40:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Feb 2024 17:41:14 +0000   Fri, 16 Feb 2024 17:40:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-089000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067676Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067676Ki
	  pods:               110
	System Info:
	  Machine ID:                 e52ffe2f70524a6885ea643203f80df6
	  System UUID:                e52ffe2f70524a6885ea643203f80df6
	  Boot ID:                    2fdb4e59-5394-4b60-90d3-5bb0e84fcd74
	  Kernel Version:             6.6.12-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.3
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-089000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kube-apiserver-kubernetes-upgrade-089000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-089000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-kubernetes-upgrade-089000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (5%)   0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 45s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet  Node kubernetes-upgrade-089000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet  Node kubernetes-upgrade-089000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet  Node kubernetes-upgrade-089000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 39s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet  Node kubernetes-upgrade-089000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet  Node kubernetes-upgrade-089000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet  Node kubernetes-upgrade-089000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                35s                kubelet  Node kubernetes-upgrade-089000 status is now: NodeReady
	  Normal  Starting                 10s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 10s)   kubelet  Node kubernetes-upgrade-089000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 10s)   kubelet  Node kubernetes-upgrade-089000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 10s)   kubelet  Node kubernetes-upgrade-089000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	
	
	==> etcd [3ed286022f8d] <==
	{"level":"info","ts":"2024-02-16T17:41:10.65176Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-16T17:41:10.651807Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-16T17:41:10.652034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2024-02-16T17:41:10.652151Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-02-16T17:41:10.652361Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T17:41:10.652424Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T17:41:10.65854Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-16T17:41:10.659863Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-16T17:41:10.660296Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-16T17:41:10.660792Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-16T17:41:10.660819Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-16T17:41:12.379252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-16T17:41:12.37933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-16T17:41:12.37935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-16T17:41:12.379364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2024-02-16T17:41:12.379369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-02-16T17:41:12.379376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2024-02-16T17:41:12.379381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-02-16T17:41:12.380568Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-089000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-16T17:41:12.380906Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:41:12.381109Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:41:12.381328Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-16T17:41:12.38139Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-16T17:41:12.385669Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-16T17:41:12.386427Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [7338c03f00e8] <==
	{"level":"info","ts":"2024-02-16T17:41:03.349282Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-16T17:41:04.268173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-16T17:41:04.268223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-16T17:41:04.268247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-16T17:41:04.26826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-02-16T17:41:04.268265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-16T17:41:04.268272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-02-16T17:41:04.268278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-16T17:41:04.269534Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-089000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-16T17:41:04.269764Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:41:04.26993Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:41:04.270214Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-16T17:41:04.270259Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-16T17:41:04.273942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-16T17:41:04.276319Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-16T17:41:07.333807Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-16T17:41:07.334127Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-089000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2024-02-16T17:41:07.334273Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-16T17:41:07.334354Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-16T17:41:07.353449Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-16T17:41:07.353509Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-16T17:41:07.353586Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2024-02-16T17:41:07.356374Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-16T17:41:07.356593Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-16T17:41:07.356652Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-089000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	
	==> kernel <==
	 17:41:19 up  1:00,  0 users,  load average: 7.92, 5.94, 5.20
	Linux kubernetes-upgrade-089000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [1ae782d32291] <==
	I0216 17:41:13.878820       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0216 17:41:13.878915       1 aggregator.go:163] waiting for initial CRD sync...
	I0216 17:41:13.878942       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0216 17:41:13.878956       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0216 17:41:13.878961       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0216 17:41:13.960146       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0216 17:41:14.046421       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0216 17:41:14.046503       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0216 17:41:14.046437       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0216 17:41:14.046467       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0216 17:41:14.046683       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0216 17:41:14.046782       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0216 17:41:14.046797       1 aggregator.go:165] initial CRD sync complete...
	I0216 17:41:14.046813       1 autoregister_controller.go:141] Starting autoregister controller
	I0216 17:41:14.046819       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0216 17:41:14.046827       1 cache.go:39] Caches are synced for autoregister controller
	I0216 17:41:14.046832       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0216 17:41:14.046987       1 shared_informer.go:318] Caches are synced for configmaps
	E0216 17:41:14.053710       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0216 17:41:14.881543       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0216 17:41:15.682363       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0216 17:41:15.689143       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0216 17:41:15.718038       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0216 17:41:15.738098       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0216 17:41:15.743833       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [f9f5c860e04d] <==
	W0216 17:41:07.350754       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.351680       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.351687       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.351730       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.351107       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.351797       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.351827       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.351841       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.351861       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.351903       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.352102       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.352174       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.352186       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.352246       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.351744       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.350967       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.352464       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.352565       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.352634       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:41:07.352807       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0216 17:41:07.352866       1 controller.go:178] quota evaluator worker shutdown
	I0216 17:41:07.352889       1 controller.go:178] quota evaluator worker shutdown
	I0216 17:41:07.352923       1 controller.go:178] quota evaluator worker shutdown
	I0216 17:41:07.353054       1 controller.go:178] quota evaluator worker shutdown
	W0216 17:41:07.353170       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4d4c459414ac] <==
	I0216 17:41:03.775278       1 serving.go:380] Generated self-signed cert in-memory
	I0216 17:41:04.225874       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0216 17:41:04.225916       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0216 17:41:04.227211       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0216 17:41:04.227532       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0216 17:41:04.227604       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0216 17:41:04.227649       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [a41699fa9dbd] <==
	I0216 17:41:16.012074       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0216 17:41:16.015667       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0216 17:41:16.015850       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	E0216 17:41:16.018914       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0216 17:41:16.018955       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0216 17:41:16.028417       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0216 17:41:16.028708       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0216 17:41:16.028771       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0216 17:41:16.038268       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0216 17:41:16.038616       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0216 17:41:16.038627       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0216 17:41:16.048749       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0216 17:41:16.049034       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0216 17:41:16.049067       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0216 17:41:16.058839       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0216 17:41:16.058949       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0216 17:41:16.058959       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0216 17:41:16.058863       1 shared_informer.go:318] Caches are synced for tokens
	I0216 17:41:16.061975       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0216 17:41:16.062190       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0216 17:41:16.062227       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0216 17:41:16.072478       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0216 17:41:16.072598       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0216 17:41:16.072611       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0216 17:41:16.072622       1 shared_informer.go:318] Caches are synced for token_cleaner
	
	
	==> kube-scheduler [55bf5386ffee] <==
	I0216 17:41:11.622551       1 serving.go:380] Generated self-signed cert in-memory
	I0216 17:41:14.050623       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0216 17:41:14.050668       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0216 17:41:14.055506       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0216 17:41:14.055678       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0216 17:41:14.055689       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0216 17:41:14.055954       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0216 17:41:14.056186       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0216 17:41:14.056204       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0216 17:41:14.056795       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0216 17:41:14.056804       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0216 17:41:14.157392       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0216 17:41:14.157429       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0216 17:41:14.157725       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kube-scheduler [9a9002bd67d8] <==
	I0216 17:41:03.928027       1 serving.go:380] Generated self-signed cert in-memory
	W0216 17:41:05.475148       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0216 17:41:05.475173       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0216 17:41:05.475185       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0216 17:41:05.475194       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0216 17:41:05.557206       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0216 17:41:05.557251       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0216 17:41:05.558743       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0216 17:41:05.558859       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0216 17:41:05.558871       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0216 17:41:05.558882       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0216 17:41:05.659166       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0216 17:41:07.335647       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0216 17:41:07.336486       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0216 17:41:07.346402       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0216 17:41:07.346550       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.155342   14742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5adf71031c11efd586fc590ea5a9ef8a7b9575252b1e06b2f416e104a7e0474"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.155353   14742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c050ffceeefffd25ddbecfa5748f2c622a95e3dc6ce5e05f829cc465b1ebb24"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.155358   14742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="322df95b6ebb8788a9c737bafce87bf1bf5600b26c88f183ef84d9e29a6c2d6c"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.155365   14742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03184a7a0e8e2e72723fc3bb2c1cdf3b7c1f453ee298f0287a5187e0c645722d"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.185130   14742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/435c535cdbd7dc091f2b50cec8873a25-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-089000\" (UID: \"435c535cdbd7dc091f2b50cec8873a25\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-089000"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.185220   14742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/012794a7d84a8e3005c5f220f959d43b-etcd-data\") pod \"etcd-kubernetes-upgrade-089000\" (UID: \"012794a7d84a8e3005c5f220f959d43b\") " pod="kube-system/etcd-kubernetes-upgrade-089000"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.185249   14742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/435c535cdbd7dc091f2b50cec8873a25-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-089000\" (UID: \"435c535cdbd7dc091f2b50cec8873a25\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-089000"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.185271   14742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/435c535cdbd7dc091f2b50cec8873a25-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-089000\" (UID: \"435c535cdbd7dc091f2b50cec8873a25\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-089000"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.185286   14742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/012794a7d84a8e3005c5f220f959d43b-etcd-certs\") pod \"etcd-kubernetes-upgrade-089000\" (UID: \"012794a7d84a8e3005c5f220f959d43b\") " pod="kube-system/etcd-kubernetes-upgrade-089000"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.258929   14742 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-089000"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: E0216 17:41:10.259431   14742 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-089000"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.286370   14742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3caafda8d6b85ef3d3f02a72ddb7480e-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-089000\" (UID: \"3caafda8d6b85ef3d3f02a72ddb7480e\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-089000"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.374399   14742 scope.go:117] "RemoveContainer" containerID="7338c03f00e8af3652524a302f471d5d5f7a0ae0d8006cbfb959b14fe644e4b1"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.384540   14742 scope.go:117] "RemoveContainer" containerID="f9f5c860e04d0729831c097da123cafac99cd5a7f48181296f26f4576687b74b"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.455577   14742 scope.go:117] "RemoveContainer" containerID="4d4c459414ac2b4de49212ccacb505a9680c6064f33cedc9c8b3bfc07bb22513"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.464422   14742 scope.go:117] "RemoveContainer" containerID="9a9002bd67d809adc7c67f9ba142e99a2493b532ff3653d70e169998a9f40204"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: E0216 17:41:10.485579   14742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-089000?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="800ms"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:10.669262   14742 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-089000"
	Feb 16 17:41:10 kubernetes-upgrade-089000 kubelet[14742]: E0216 17:41:10.669752   14742 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-089000"
	Feb 16 17:41:11 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:11.480971   14742 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-089000"
	Feb 16 17:41:14 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:14.056859   14742 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-089000"
	Feb 16 17:41:14 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:14.056994   14742 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-089000"
	Feb 16 17:41:14 kubernetes-upgrade-089000 kubelet[14742]: E0216 17:41:14.804194   14742 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-kubernetes-upgrade-089000\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-089000"
	Feb 16 17:41:14 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:14.876252   14742 apiserver.go:52] "Watching apiserver"
	Feb 16 17:41:14 kubernetes-upgrade-089000 kubelet[14742]: I0216 17:41:14.884193   14742 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-089000 -n kubernetes-upgrade-089000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-089000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-089000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-089000 describe pod storage-provisioner: exit status 1 (62.330819ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-089000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-089000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-089000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-089000: (2.705347616s)
--- FAIL: TestKubernetesUpgrade (578.25s)

TestStartStop/group/old-k8s-version/serial/FirstStart (257.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0216 09:45:55.269421    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m16.921422685s)

-- stdout --
	* [old-k8s-version-356000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=17936
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-356000 in cluster old-k8s-version-356000
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0216 09:45:52.316481   18426 out.go:291] Setting OutFile to fd 1 ...
	I0216 09:45:52.316867   18426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:45:52.316877   18426 out.go:304] Setting ErrFile to fd 2...
	I0216 09:45:52.316884   18426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:45:52.317201   18426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 09:45:52.320165   18426 out.go:298] Setting JSON to false
	I0216 09:45:52.345401   18426 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4523,"bootTime":1708101029,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 09:45:52.345521   18426 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 09:45:52.367685   18426 out.go:177] * [old-k8s-version-356000] minikube v1.32.0 on Darwin 14.3.1
	I0216 09:45:52.426495   18426 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 09:45:52.426576   18426 notify.go:220] Checking for updates...
	I0216 09:45:52.469954   18426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 09:45:52.491290   18426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 09:45:52.512462   18426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 09:45:52.534253   18426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	I0216 09:45:52.555193   18426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 09:45:52.576793   18426 config.go:182] Loaded profile config "kubenet-862000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 09:45:52.576930   18426 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 09:45:52.633248   18426 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 09:45:52.633415   18426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:45:52.746659   18426 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-16 17:45:52.735654455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:45:52.789314   18426 out.go:177] * Using the docker driver based on user configuration
	I0216 09:45:52.810354   18426 start.go:299] selected driver: docker
	I0216 09:45:52.810368   18426 start.go:903] validating driver "docker" against <nil>
	I0216 09:45:52.810378   18426 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 09:45:52.814000   18426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:45:52.935066   18426 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-16 17:45:52.92486987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:45:52.935277   18426 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 09:45:52.935455   18426 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 09:45:52.956723   18426 out.go:177] * Using Docker Desktop driver with root privileges
	I0216 09:45:52.993913   18426 cni.go:84] Creating CNI manager for ""
	I0216 09:45:52.993942   18426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 09:45:52.993954   18426 start_flags.go:323] config:
	{Name:old-k8s-version-356000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:45:53.003212   18426 out.go:177] * Starting control plane node old-k8s-version-356000 in cluster old-k8s-version-356000
	I0216 09:45:53.061630   18426 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 09:45:53.082734   18426 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 09:45:53.124609   18426 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 09:45:53.124675   18426 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 09:45:53.124739   18426 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 09:45:53.124787   18426 cache.go:56] Caching tarball of preloaded images
	I0216 09:45:53.125101   18426 preload.go:174] Found /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 09:45:53.125653   18426 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 09:45:53.126092   18426 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/config.json ...
	I0216 09:45:53.126137   18426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/config.json: {Name:mkfc113630e7ad903ada6e2a9851d3843fb77e23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:45:53.197786   18426 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 09:45:53.197903   18426 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 09:45:53.197923   18426 cache.go:194] Successfully downloaded all kic artifacts
	I0216 09:45:53.198168   18426 start.go:365] acquiring machines lock for old-k8s-version-356000: {Name:mkcbb668d74284a5583a7ae9844b8f225578b58f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 09:45:53.198369   18426 start.go:369] acquired machines lock for "old-k8s-version-356000" in 186.538µs
	I0216 09:45:53.198404   18426 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-356000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 09:45:53.198473   18426 start.go:125] createHost starting for "" (driver="docker")
	I0216 09:45:53.242790   18426 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0216 09:45:53.243040   18426 start.go:159] libmachine.API.Create for "old-k8s-version-356000" (driver="docker")
	I0216 09:45:53.243072   18426 client.go:168] LocalClient.Create starting
	I0216 09:45:53.243195   18426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem
	I0216 09:45:53.243245   18426 main.go:141] libmachine: Decoding PEM data...
	I0216 09:45:53.243262   18426 main.go:141] libmachine: Parsing certificate...
	I0216 09:45:53.243319   18426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem
	I0216 09:45:53.243357   18426 main.go:141] libmachine: Decoding PEM data...
	I0216 09:45:53.243377   18426 main.go:141] libmachine: Parsing certificate...
	I0216 09:45:53.243822   18426 cli_runner.go:164] Run: docker network inspect old-k8s-version-356000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0216 09:45:53.375326   18426 cli_runner.go:211] docker network inspect old-k8s-version-356000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0216 09:45:53.375456   18426 network_create.go:281] running [docker network inspect old-k8s-version-356000] to gather additional debugging logs...
	I0216 09:45:53.375472   18426 cli_runner.go:164] Run: docker network inspect old-k8s-version-356000
	W0216 09:45:53.426776   18426 cli_runner.go:211] docker network inspect old-k8s-version-356000 returned with exit code 1
	I0216 09:45:53.426805   18426 network_create.go:284] error running [docker network inspect old-k8s-version-356000]: docker network inspect old-k8s-version-356000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-356000 not found
	I0216 09:45:53.426816   18426 network_create.go:286] output of [docker network inspect old-k8s-version-356000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-356000 not found
	
	** /stderr **
	I0216 09:45:53.426967   18426 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 09:45:53.478271   18426 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0216 09:45:53.479894   18426 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0216 09:45:53.481479   18426 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0216 09:45:53.481832   18426 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002368540}
	I0216 09:45:53.481851   18426 network_create.go:124] attempt to create docker network old-k8s-version-356000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0216 09:45:53.481917   18426 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-356000 old-k8s-version-356000
	I0216 09:45:53.568543   18426 network_create.go:108] docker network old-k8s-version-356000 192.168.76.0/24 created
	I0216 09:45:53.568581   18426 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-356000" container
	I0216 09:45:53.568698   18426 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0216 09:45:53.620288   18426 cli_runner.go:164] Run: docker volume create old-k8s-version-356000 --label name.minikube.sigs.k8s.io=old-k8s-version-356000 --label created_by.minikube.sigs.k8s.io=true
	I0216 09:45:53.671601   18426 oci.go:103] Successfully created a docker volume old-k8s-version-356000
	I0216 09:45:53.671710   18426 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-356000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-356000 --entrypoint /usr/bin/test -v old-k8s-version-356000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0216 09:45:54.138024   18426 oci.go:107] Successfully prepared a docker volume old-k8s-version-356000
	I0216 09:45:54.138079   18426 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 09:45:54.138092   18426 kic.go:194] Starting extracting preloaded images to volume ...
	I0216 09:45:54.138187   18426 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-356000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0216 09:45:56.343354   18426 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-356000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (2.205023157s)
	I0216 09:45:56.343385   18426 kic.go:203] duration metric: took 2.205249 seconds to extract preloaded images to volume
	I0216 09:45:56.343548   18426 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0216 09:45:56.472077   18426 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-356000 --name old-k8s-version-356000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-356000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-356000 --network old-k8s-version-356000 --ip 192.168.76.2 --volume old-k8s-version-356000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0216 09:45:56.795052   18426 cli_runner.go:164] Run: docker container inspect old-k8s-version-356000 --format={{.State.Running}}
	I0216 09:45:56.857179   18426 cli_runner.go:164] Run: docker container inspect old-k8s-version-356000 --format={{.State.Status}}
	I0216 09:45:56.917226   18426 cli_runner.go:164] Run: docker exec old-k8s-version-356000 stat /var/lib/dpkg/alternatives/iptables
	I0216 09:45:57.027377   18426 oci.go:144] the created container "old-k8s-version-356000" has a running status.
	I0216 09:45:57.027418   18426 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa...
	I0216 09:45:57.117840   18426 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0216 09:45:57.185603   18426 cli_runner.go:164] Run: docker container inspect old-k8s-version-356000 --format={{.State.Status}}
	I0216 09:45:57.245021   18426 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0216 09:45:57.245056   18426 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-356000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0216 09:45:57.366977   18426 cli_runner.go:164] Run: docker container inspect old-k8s-version-356000 --format={{.State.Status}}
	I0216 09:45:57.427798   18426 machine.go:88] provisioning docker machine ...
	I0216 09:45:57.427857   18426 ubuntu.go:169] provisioning hostname "old-k8s-version-356000"
	I0216 09:45:57.427980   18426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:45:57.486888   18426 main.go:141] libmachine: Using SSH client type: native
	I0216 09:45:57.487207   18426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53808 <nil> <nil>}
	I0216 09:45:57.487219   18426 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-356000 && echo "old-k8s-version-356000" | sudo tee /etc/hostname
	I0216 09:45:57.647650   18426 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-356000
	
	I0216 09:45:57.647763   18426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:45:57.700626   18426 main.go:141] libmachine: Using SSH client type: native
	I0216 09:45:57.700914   18426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53808 <nil> <nil>}
	I0216 09:45:57.700932   18426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-356000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-356000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-356000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 09:45:57.837383   18426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 09:45:57.837419   18426 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17936-1021/.minikube CaCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17936-1021/.minikube}
	I0216 09:45:57.837442   18426 ubuntu.go:177] setting up certificates
	I0216 09:45:57.837455   18426 provision.go:83] configureAuth start
	I0216 09:45:57.837556   18426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356000
	I0216 09:45:57.890257   18426 provision.go:138] copyHostCerts
	I0216 09:45:57.890373   18426 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem, removing ...
	I0216 09:45:57.890386   18426 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
	I0216 09:45:57.890522   18426 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem (1082 bytes)
	I0216 09:45:57.890766   18426 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem, removing ...
	I0216 09:45:57.890773   18426 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
	I0216 09:45:57.890850   18426 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem (1123 bytes)
	I0216 09:45:57.891029   18426 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem, removing ...
	I0216 09:45:57.891037   18426 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
	I0216 09:45:57.891111   18426 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem (1675 bytes)
	I0216 09:45:57.891279   18426 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-356000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-356000]
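
The "generating server cert" line above signs a docker TLS server certificate whose SAN list mixes IPs and DNS names. A self-contained crypto/x509 sketch of that operation: the throwaway in-process CA stands in for the .minikube ca.pem/ca-key.pem files, and must is a local helper, not minikube code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caCert := must(x509.ParseCertificate(
		must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

	// Server certificate carrying the SAN set from the log line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-356000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-356000"},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	must(0, os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
}
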
	I0216 09:45:57.956629   18426 provision.go:172] copyRemoteCerts
	I0216 09:45:57.956705   18426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 09:45:57.956768   18426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:45:58.010018   18426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53808 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa Username:docker}
	I0216 09:45:58.110967   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 09:45:58.151852   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0216 09:45:58.193106   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 09:45:58.233909   18426 provision.go:86] duration metric: configureAuth took 396.430861ms
	I0216 09:45:58.233925   18426 ubuntu.go:193] setting minikube options for container-runtime
	I0216 09:45:58.234062   18426 config.go:182] Loaded profile config "old-k8s-version-356000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 09:45:58.234132   18426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:45:58.289844   18426 main.go:141] libmachine: Using SSH client type: native
	I0216 09:45:58.290164   18426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53808 <nil> <nil>}
	I0216 09:45:58.290183   18426 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 09:45:58.427624   18426 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 09:45:58.427642   18426 ubuntu.go:71] root file system type: overlay
	I0216 09:45:58.427757   18426 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 09:45:58.427843   18426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:45:58.480583   18426 main.go:141] libmachine: Using SSH client type: native
	I0216 09:45:58.480912   18426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53808 <nil> <nil>}
	I0216 09:45:58.480960   18426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 09:45:58.639357   18426 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 09:45:58.639469   18426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:45:58.693191   18426 main.go:141] libmachine: Using SSH client type: native
	I0216 09:45:58.693484   18426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53808 <nil> <nil>}
	I0216 09:45:58.693497   18426 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 09:45:59.356804   18426 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-16 17:45:58.634389397 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
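The diff-then-swap exchange above is an idempotency guard: the unit file is replaced, and docker reloaded and restarted, only when the rendered unit actually differs from what is on disk. A local Go sketch of that guard under the same assumptions; installUnitIfChanged is a hypothetical name.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnitIfChanged reproduces `diff -u old new || { mv new old; systemctl
// daemon-reload; systemctl enable ...; systemctl restart ...; }`: the service
// is only touched when contents differ.
func installUnitIfChanged(path string, rendered []byte, service string) (bool, error) {
	if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, rendered) {
		return false, nil // identical: skip the disruptive restart
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return false, err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return false, err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return true, fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
		}
	}
	return true, nil
}

func main() {
	changed, err := installUnitIfChanged("/lib/systemd/system/docker.service",
		[]byte("[Unit]\n...\n"), "docker") // placeholder unit body
	fmt.Println(changed, err)
}
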
	I0216 09:45:59.356823   18426 machine.go:91] provisioned docker machine in 1.928951734s
	I0216 09:45:59.356830   18426 client.go:171] LocalClient.Create took 6.113632302s
	I0216 09:45:59.356850   18426 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-356000" took 6.11369231s
	I0216 09:45:59.356861   18426 start.go:300] post-start starting for "old-k8s-version-356000" (driver="docker")
	I0216 09:45:59.356869   18426 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 09:45:59.356938   18426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 09:45:59.356999   18426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:45:59.410326   18426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53808 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa Username:docker}
	I0216 09:45:59.513617   18426 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 09:45:59.517619   18426 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 09:45:59.517647   18426 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 09:45:59.517655   18426 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 09:45:59.517661   18426 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 09:45:59.517672   18426 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/addons for local assets ...
	I0216 09:45:59.517778   18426 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/files for local assets ...
	I0216 09:45:59.517967   18426 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> 21512.pem in /etc/ssl/certs
	I0216 09:45:59.518167   18426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 09:45:59.533673   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:45:59.574739   18426 start.go:303] post-start completed in 217.862189ms
	I0216 09:45:59.575328   18426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356000
	I0216 09:45:59.626216   18426 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/config.json ...
	I0216 09:45:59.626704   18426 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 09:45:59.626763   18426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:45:59.678570   18426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53808 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa Username:docker}
	I0216 09:45:59.769477   18426 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 09:45:59.775092   18426 start.go:128] duration metric: createHost completed in 6.576470429s
	I0216 09:45:59.775113   18426 start.go:83] releasing machines lock for "old-k8s-version-356000", held for 6.576604434s
	I0216 09:45:59.775215   18426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356000
	I0216 09:45:59.833337   18426 ssh_runner.go:195] Run: cat /version.json
	I0216 09:45:59.833343   18426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 09:45:59.833415   18426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:45:59.833468   18426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:45:59.894814   18426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53808 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa Username:docker}
	I0216 09:45:59.894840   18426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53808 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa Username:docker}
	I0216 09:46:00.092612   18426 ssh_runner.go:195] Run: systemctl --version
	I0216 09:46:00.097398   18426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 09:46:00.102514   18426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 09:46:00.143185   18426 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 09:46:00.143257   18426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 09:46:00.171242   18426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 09:46:00.199884   18426 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
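
The find/sed pipelines above normalize CNI config files in place. For one file, the loopback patch amounts to the Go below: ensure a "name" field exists and pin cniVersion to 1.0.0. A sketch only -- minikube really does this with sed over SSH, and the path in main is hypothetical.

package main

import (
	"encoding/json"
	"os"
)

func patchLoopbackConf(path string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		return err
	}
	if conf["type"] == "loopback" {
		if _, ok := conf["name"]; !ok {
			conf["name"] = "loopback" // sed inserts this line when missing
		}
		conf["cniVersion"] = "1.0.0" // sed rewrites whatever version was there
	}
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := patchLoopbackConf("/etc/cni/net.d/200-loopback.conf"); err != nil {
		panic(err)
	}
}
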
	I0216 09:46:00.199902   18426 start.go:475] detecting cgroup driver to use...
	I0216 09:46:00.199919   18426 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:46:00.200016   18426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:46:00.227593   18426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0216 09:46:00.243923   18426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 09:46:00.259850   18426 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 09:46:00.259905   18426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 09:46:00.277152   18426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:46:00.295187   18426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 09:46:00.312201   18426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:46:00.334101   18426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 09:46:00.351121   18426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 09:46:00.367311   18426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 09:46:00.383090   18426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 09:46:00.398215   18426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:46:00.463708   18426 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 09:46:00.556201   18426 start.go:475] detecting cgroup driver to use...
	I0216 09:46:00.556222   18426 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:46:00.556298   18426 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 09:46:00.576310   18426 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 09:46:00.576414   18426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 09:46:00.596454   18426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:46:00.630643   18426 ssh_runner.go:195] Run: which cri-dockerd
	I0216 09:46:00.635483   18426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 09:46:00.651849   18426 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 09:46:00.682029   18426 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 09:46:00.753293   18426 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 09:46:00.856329   18426 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 09:46:00.856426   18426 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 09:46:00.885904   18426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:46:00.950945   18426 ssh_runner.go:195] Run: sudo systemctl restart docker
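
The "configuring docker to use cgroupfs" step above writes a small /etc/docker/daemon.json and restarts the daemon. The 130-byte payload itself is not shown in the log, so the fields below are representative dockerd options (exec-opts with native.cgroupdriver is a real daemon setting), not a reconstruction of the actual file.

package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig models a minimal daemon.json that pins the cgroup driver.
// Field choice is an assumption for illustration; the log only tells us the
// file is 130 bytes and selects "cgroupfs".
type daemonConfig struct {
	ExecOpts  []string `json:"exec-opts"`
	LogDriver string   `json:"log-driver,omitempty"`
}

func main() {
	out, err := json.MarshalIndent(daemonConfig{
		ExecOpts:  []string{"native.cgroupdriver=cgroupfs"},
		LogDriver: "json-file",
	}, "", "  ")
	if err != nil {
		panic(err)
	}
	// Write this to /etc/docker/daemon.json, then daemon-reload + restart docker,
	// as the two systemctl runs above do.
	fmt.Println(string(out))
}
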
	I0216 09:46:01.207156   18426 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 09:46:01.232320   18426 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 09:46:01.301971   18426 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0216 09:46:01.302079   18426 cli_runner.go:164] Run: docker exec -t old-k8s-version-356000 dig +short host.docker.internal
	I0216 09:46:01.427082   18426 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0216 09:46:01.427174   18426 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0216 09:46:01.431782   18426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 09:46:01.449333   18426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:46:01.502090   18426 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 09:46:01.502174   18426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:46:01.523335   18426 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 09:46:01.523349   18426 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 09:46:01.523421   18426 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 09:46:01.538593   18426 ssh_runner.go:195] Run: which lz4
	I0216 09:46:01.543388   18426 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 09:46:01.547613   18426 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 09:46:01.547635   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
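
The stat probe above fails (status 1), so the ~370 MB tarball is shipped; on a warm node the probe succeeds and the copy is skipped. A local-filesystem sketch of the same "copy only if absent" guard -- in the log both the stat and the transfer run over SSH.

package main

import (
	"errors"
	"fmt"
	"io"
	"os"
)

// copyIfAbsent stats the destination first and only performs the expensive
// copy when the file does not exist, mirroring the probe-then-scp above.
func copyIfAbsent(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already there: skip the copy
	} else if !errors.Is(err, os.ErrNotExist) {
		return err // some other stat failure: surface it
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	fmt.Println(copyIfAbsent("preloaded.tar.lz4", "/preloaded.tar.lz4"))
}
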
	I0216 09:46:07.790983   18426 docker.go:649] Took 6.247516 seconds to copy over tarball
	I0216 09:46:07.791070   18426 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 09:46:09.515434   18426 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.724312958s)
	I0216 09:46:09.515449   18426 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0216 09:46:09.567297   18426 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 09:46:09.582704   18426 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0216 09:46:09.615348   18426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:46:09.678987   18426 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 09:46:10.325388   18426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:46:10.346668   18426 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 09:46:10.346680   18426 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 09:46:10.346690   18426 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 09:46:10.351745   18426 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:46:10.351930   18426 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0216 09:46:10.352151   18426 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 09:46:10.352448   18426 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:46:10.353303   18426 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:46:10.353805   18426 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0216 09:46:10.353895   18426 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:46:10.353911   18426 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0216 09:46:10.355855   18426 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0216 09:46:10.358680   18426 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 09:46:10.358854   18426 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:46:10.358956   18426 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:46:10.359114   18426 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:46:10.361634   18426 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0216 09:46:10.362090   18426 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0216 09:46:10.362166   18426 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:46:12.251831   18426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:46:12.278620   18426 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0216 09:46:12.278661   18426 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:46:12.278729   18426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:46:12.297991   18426 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
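
The inspect / "needs transfer" / rmi / "Loading image from" cycle just completed for kube-proxy repeats above and below for every image in the list. A condensed sketch of that cycle; the ID comparison is illustrative (docker may report IDs with a sha256: prefix, which the log elides).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage reads the image ID from the runtime; when the image is absent
// or its hash differs from the cached expectation, it evicts the stale copy
// and reports that the cached image must be loaded instead.
func ensureImage(ref, wantID string) (needsLoad bool) {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err == nil && strings.Contains(strings.TrimSpace(string(out)), wantID) {
		return false // correct image already present
	}
	_ = exec.Command("docker", "rmi", ref).Run() // drop any wrong-hash copy
	return true
}

func main() {
	fmt.Println(ensureImage("registry.k8s.io/kube-proxy:v1.16.0",
		"c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384"))
}
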
	I0216 09:46:12.318032   18426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:46:12.356956   18426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:46:12.362074   18426 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0216 09:46:12.362103   18426 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:46:12.362168   18426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:46:12.379322   18426 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0216 09:46:12.379356   18426 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:46:12.379436   18426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:46:12.384375   18426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0216 09:46:12.386412   18426 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0216 09:46:12.394961   18426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0216 09:46:12.401759   18426 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0216 09:46:12.406407   18426 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0216 09:46:12.406432   18426 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0216 09:46:12.406494   18426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0216 09:46:12.411327   18426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:46:12.418419   18426 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0216 09:46:12.418458   18426 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0216 09:46:12.418529   18426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0216 09:46:12.419070   18426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0216 09:46:12.433610   18426 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0216 09:46:12.442124   18426 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0216 09:46:12.442184   18426 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:46:12.442268   18426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:46:12.450077   18426 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0216 09:46:12.451233   18426 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0216 09:46:12.451258   18426 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0216 09:46:12.451328   18426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0216 09:46:12.508668   18426 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0216 09:46:12.514457   18426 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0216 09:46:12.717186   18426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 09:46:12.738469   18426 cache_images.go:92] LoadImages completed in 2.391719777s
	W0216 09:46:12.738524   18426 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0216 09:46:12.738600   18426 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 09:46:12.791434   18426 cni.go:84] Creating CNI manager for ""
	I0216 09:46:12.791453   18426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 09:46:12.791466   18426 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 09:46:12.791483   18426 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-356000 NodeName:old-k8s-version-356000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 09:46:12.791570   18426 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-356000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-356000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0216 09:46:12.791622   18426 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-356000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 09:46:12.791712   18426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0216 09:46:12.806824   18426 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 09:46:12.806895   18426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 09:46:12.821340   18426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0216 09:46:12.849560   18426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 09:46:12.879110   18426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0216 09:46:12.911998   18426 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0216 09:46:12.917393   18426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
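
This is the second use of the same idempotent /etc/hosts rewrite in this log (host.minikube.internal earlier, control-plane.minikube.internal here): drop any stale line for the name, append a fresh mapping, write the whole file back. A Go sketch of that rewrite; minikube really does it with the bash one-liner shown above.

package main

import (
	"os"
	"strings"
)

// setHostsEntry removes every line ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" mapping, mirroring `{ grep -v ...; echo ...; } > tmp; cp`.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	keep := []string{}
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
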
	I0216 09:46:12.944213   18426 certs.go:56] Setting up /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000 for IP: 192.168.76.2
	I0216 09:46:12.944268   18426 certs.go:190] acquiring lock for shared ca certs: {Name:mk8795f926ccc5dd497b243df5a2c158b5c5b28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:46:12.944540   18426 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key
	I0216 09:46:12.944646   18426 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key
	I0216 09:46:12.944698   18426 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/client.key
	I0216 09:46:12.944711   18426 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/client.crt with IP's: []
	I0216 09:46:13.048986   18426 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/client.crt ...
	I0216 09:46:13.049001   18426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/client.crt: {Name:mka7fc7da421879d5f99f6bc7a1e1055a9f679a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:46:13.049301   18426 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/client.key ...
	I0216 09:46:13.049325   18426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/client.key: {Name:mk6dbf4477e491adb529d92a6d504aabd983d6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:46:13.049547   18426 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.key.31bdca25
	I0216 09:46:13.049561   18426 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0216 09:46:13.186832   18426 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.crt.31bdca25 ...
	I0216 09:46:13.186846   18426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.crt.31bdca25: {Name:mk293dd0e774fc295fa25febb20d6dc0842fee35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:46:13.187150   18426 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.key.31bdca25 ...
	I0216 09:46:13.187160   18426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.key.31bdca25: {Name:mk96fc7d316f9527d3f8571aa727fa0dab5f1c06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:46:13.187361   18426 certs.go:337] copying /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.crt
	I0216 09:46:13.187535   18426 certs.go:341] copying /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.key
	I0216 09:46:13.187700   18426 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/proxy-client.key
	I0216 09:46:13.187721   18426 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/proxy-client.crt with IP's: []
	I0216 09:46:13.315751   18426 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/proxy-client.crt ...
	I0216 09:46:13.315764   18426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/proxy-client.crt: {Name:mkf231c5ceb7687b1415f9d80d570c0f7d7c4acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:46:13.341124   18426 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/proxy-client.key ...
	I0216 09:46:13.341155   18426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/proxy-client.key: {Name:mkb2f4eaf4664e8f738301025f0fdeb79ddb9bfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:46:13.364309   18426 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem (1338 bytes)
	W0216 09:46:13.364396   18426 certs.go:433] ignoring /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151_empty.pem, impossibly tiny 0 bytes
	I0216 09:46:13.364415   18426 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 09:46:13.364445   18426 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem (1082 bytes)
	I0216 09:46:13.364476   18426 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem (1123 bytes)
	I0216 09:46:13.364511   18426 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem (1675 bytes)
	I0216 09:46:13.364581   18426 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:46:13.365091   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 09:46:13.406076   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0216 09:46:13.448308   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 09:46:13.488101   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 09:46:13.530546   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 09:46:13.571162   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 09:46:13.612992   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 09:46:13.652990   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 09:46:13.693519   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /usr/share/ca-certificates/21512.pem (1708 bytes)
	I0216 09:46:13.734264   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 09:46:13.774196   18426 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem --> /usr/share/ca-certificates/2151.pem (1338 bytes)
	I0216 09:46:13.814625   18426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 09:46:13.843583   18426 ssh_runner.go:195] Run: openssl version
	I0216 09:46:13.850782   18426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21512.pem && ln -fs /usr/share/ca-certificates/21512.pem /etc/ssl/certs/21512.pem"
	I0216 09:46:13.866387   18426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21512.pem
	I0216 09:46:13.870825   18426 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:51 /usr/share/ca-certificates/21512.pem
	I0216 09:46:13.870883   18426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21512.pem
	I0216 09:46:13.878124   18426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21512.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 09:46:13.894105   18426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 09:46:13.910672   18426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:46:13.915128   18426 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:46:13.915186   18426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:46:13.922106   18426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 09:46:13.939006   18426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2151.pem && ln -fs /usr/share/ca-certificates/2151.pem /etc/ssl/certs/2151.pem"
	I0216 09:46:13.955137   18426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2151.pem
	I0216 09:46:13.959595   18426 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:51 /usr/share/ca-certificates/2151.pem
	I0216 09:46:13.959647   18426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2151.pem
	I0216 09:46:13.966606   18426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2151.pem /etc/ssl/certs/51391683.0"
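
Each of the three cert installs above ends with the same hash-and-symlink step: OpenSSL-style trust stores resolve CAs through <subject-hash>.0 symlinks (e.g. b5213941.0 -> minikubeCA.pem). A sketch of that step; it shells out to openssl because the subject-hash algorithm is OpenSSL's own.

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the certificate's subject hash via openssl and
// creates the <hash>.0 symlink in the trust directory, like `ln -fs` above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // refresh any existing link, matching -f
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		panic(err)
	}
}
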
	I0216 09:46:13.982192   18426 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 09:46:13.986433   18426 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0216 09:46:13.986490   18426 kubeadm.go:404] StartCluster: {Name:old-k8s-version-356000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:46:13.986589   18426 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 09:46:14.006591   18426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 09:46:14.021889   18426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 09:46:14.037588   18426 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 09:46:14.037652   18426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:46:14.052858   18426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 09:46:14.052885   18426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
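The long --ignore-preflight-errors list above disables the individual kubeadm preflight checks (Port-10250, Swap, NumCPU, SystemVerification, the DirAvailable/FileAvailable checks, and so on) that are expected to trip inside a Docker container. As an illustrative aside, the same checks can be run in isolation with the preflight phase subcommand (a sketch, assuming this kubeadm version's phase CLI):
	# Run only the preflight checks against the generated config (sketch).
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml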
	I0216 09:46:14.111974   18426 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 09:46:14.112012   18426 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 09:46:14.378386   18426 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 09:46:14.378506   18426 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 09:46:14.378610   18426 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 09:46:14.569100   18426 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 09:46:14.596824   18426 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 09:46:14.603457   18426 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 09:46:14.670318   18426 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 09:46:14.693002   18426 out.go:204]   - Generating certificates and keys ...
	I0216 09:46:14.693154   18426 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 09:46:14.693208   18426 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 09:46:14.954199   18426 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0216 09:46:15.203116   18426 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0216 09:46:15.445856   18426 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0216 09:46:15.581305   18426 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0216 09:46:15.641921   18426 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0216 09:46:15.642073   18426 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-356000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0216 09:46:15.751728   18426 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0216 09:46:15.751890   18426 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-356000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0216 09:46:15.883425   18426 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0216 09:46:16.079878   18426 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0216 09:46:16.245321   18426 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0216 09:46:16.245403   18426 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 09:46:16.356809   18426 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 09:46:16.550467   18426 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 09:46:16.688475   18426 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 09:46:16.819120   18426 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 09:46:16.820085   18426 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 09:46:16.846239   18426 out.go:204]   - Booting up control plane ...
	I0216 09:46:16.846332   18426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 09:46:16.846423   18426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 09:46:16.846510   18426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 09:46:16.846601   18426 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 09:46:16.846784   18426 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 09:46:56.830617   18426 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 09:46:56.831037   18426 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:46:56.831215   18426 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:47:01.832175   18426 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:47:01.832408   18426 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:47:11.834080   18426 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:47:11.834255   18426 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:47:31.835663   18426 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:47:31.835813   18426 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:48:11.839484   18426 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:48:11.839714   18426 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:48:11.839733   18426 kubeadm.go:322] 
	I0216 09:48:11.839791   18426 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 09:48:11.839839   18426 kubeadm.go:322] 	timed out waiting for the condition
	I0216 09:48:11.839847   18426 kubeadm.go:322] 
	I0216 09:48:11.839911   18426 kubeadm.go:322] This error is likely caused by:
	I0216 09:48:11.839943   18426 kubeadm.go:322] 	- The kubelet is not running
	I0216 09:48:11.840044   18426 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 09:48:11.840055   18426 kubeadm.go:322] 
	I0216 09:48:11.840127   18426 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 09:48:11.840172   18426 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 09:48:11.840203   18426 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 09:48:11.840210   18426 kubeadm.go:322] 
	I0216 09:48:11.840299   18426 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 09:48:11.840374   18426 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 09:48:11.840440   18426 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 09:48:11.840480   18426 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 09:48:11.840544   18426 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 09:48:11.840570   18426 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 09:48:11.844675   18426 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 09:48:11.844758   18426 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0216 09:48:11.844858   18426 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 09:48:11.844962   18426 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 09:48:11.845031   18426 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 09:48:11.845093   18426 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
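For reference, the [kubelet-check] loop above polls the kubelet's local healthz endpoint on port 10248. The probe, and the triage steps kubeadm suggests, can be reproduced by hand from the host; this sketch assumes the docker driver's node container carries the profile name (old-k8s-version-356000) and has curl available:
	# The check kubeadm loops on: kubelet healthz on 127.0.0.1:10248.
	docker exec old-k8s-version-356000 curl -sSL http://localhost:10248/healthz
	# Kubelet status and recent logs, as the error text suggests.
	docker exec old-k8s-version-356000 systemctl status kubelet
	docker exec old-k8s-version-356000 journalctl -xeu kubelet
	# Control-plane containers that may have started and crashed.
	docker exec old-k8s-version-356000 /bin/sh -c 'docker ps -a | grep kube | grep -v pause'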
	W0216 09:48:11.845172   18426 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-356000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-356000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
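Of the warnings in the stderr block above, the IsDockerSystemdCheck one is the usual suspect when the kubelet never answers healthz: Docker is using the cgroupfs driver while systemd is recommended, and a kubelet/runtime cgroup-driver mismatch can keep the kubelet from starting pods. A hedged sketch of checking and aligning the driver (standard Docker configuration; whether it applies inside this kic image is an assumption, and tee overwrites any existing daemon.json):
	# Report the cgroup driver Docker is actually using.
	docker info --format '{{.CgroupDriver}}'
	# Switch Docker to the systemd driver so it matches the kubelet.
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=systemd"]
	}
	EOF
	sudo systemctl restart docker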
	
	I0216 09:48:11.845216   18426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
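Between attempts, the kubeadm reset --force above tears the half-initialized node down non-interactively: it stops kube containers via the given CRI socket and removes the state kubeadm wrote under /etc/kubernetes, which is exactly what the stale-config check below then fails to find. Note that minikube keeps its certificates in /var/lib/minikube/certs, outside reset's default cleanup path, which is why the second attempt logs "Using existing ... certificate and key on disk" rather than regenerating them. An illustrative equivalent of the cleanup (not the log's exact command):
	# Sketch only: the files reset clears, matching the config check below.
	sudo rm -rf /etc/kubernetes/manifests /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	            /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf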
	I0216 09:48:12.262174   18426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 09:48:12.279300   18426 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 09:48:12.279361   18426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:48:12.294464   18426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 09:48:12.294500   18426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 09:48:12.347101   18426 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 09:48:12.347141   18426 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 09:48:12.574546   18426 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 09:48:12.574658   18426 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 09:48:12.574750   18426 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 09:48:12.729920   18426 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 09:48:12.730651   18426 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 09:48:12.737053   18426 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 09:48:12.799801   18426 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 09:48:12.821323   18426 out.go:204]   - Generating certificates and keys ...
	I0216 09:48:12.821454   18426 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 09:48:12.821525   18426 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 09:48:12.821621   18426 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 09:48:12.821688   18426 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 09:48:12.821780   18426 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 09:48:12.821838   18426 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 09:48:12.821906   18426 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 09:48:12.821979   18426 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 09:48:12.822037   18426 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 09:48:12.822157   18426 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 09:48:12.822258   18426 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 09:48:12.822352   18426 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 09:48:13.017676   18426 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 09:48:13.238901   18426 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 09:48:13.552141   18426 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 09:48:13.592668   18426 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 09:48:13.593346   18426 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 09:48:13.614911   18426 out.go:204]   - Booting up control plane ...
	I0216 09:48:13.615008   18426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 09:48:13.615096   18426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 09:48:13.615185   18426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 09:48:13.615270   18426 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 09:48:13.615439   18426 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 09:48:53.603282   18426 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 09:48:53.603593   18426 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:48:53.603799   18426 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:48:58.605252   18426 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:48:58.605392   18426 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:49:08.606399   18426 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:49:08.606556   18426 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:49:28.608224   18426 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:49:28.608373   18426 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:50:08.611752   18426 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:50:08.611907   18426 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:50:08.611915   18426 kubeadm.go:322] 
	I0216 09:50:08.611944   18426 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 09:50:08.611983   18426 kubeadm.go:322] 	timed out waiting for the condition
	I0216 09:50:08.611994   18426 kubeadm.go:322] 
	I0216 09:50:08.612020   18426 kubeadm.go:322] This error is likely caused by:
	I0216 09:50:08.612049   18426 kubeadm.go:322] 	- The kubelet is not running
	I0216 09:50:08.612141   18426 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 09:50:08.612153   18426 kubeadm.go:322] 
	I0216 09:50:08.612230   18426 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 09:50:08.612257   18426 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 09:50:08.612279   18426 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 09:50:08.612285   18426 kubeadm.go:322] 
	I0216 09:50:08.612364   18426 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 09:50:08.612449   18426 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 09:50:08.612522   18426 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 09:50:08.612564   18426 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 09:50:08.612628   18426 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 09:50:08.612661   18426 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 09:50:08.616485   18426 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 09:50:08.616546   18426 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0216 09:50:08.616646   18426 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 09:50:08.616729   18426 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 09:50:08.616800   18426 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 09:50:08.616856   18426 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 09:50:08.616904   18426 kubeadm.go:406] StartCluster complete in 3m54.625836035s
	I0216 09:50:08.616987   18426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:50:08.633359   18426 logs.go:276] 0 containers: []
	W0216 09:50:08.633372   18426 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:50:08.633447   18426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:50:08.652092   18426 logs.go:276] 0 containers: []
	W0216 09:50:08.652106   18426 logs.go:278] No container was found matching "etcd"
	I0216 09:50:08.652175   18426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:50:08.669245   18426 logs.go:276] 0 containers: []
	W0216 09:50:08.669259   18426 logs.go:278] No container was found matching "coredns"
	I0216 09:50:08.669332   18426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:50:08.686142   18426 logs.go:276] 0 containers: []
	W0216 09:50:08.686158   18426 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:50:08.686226   18426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:50:08.703601   18426 logs.go:276] 0 containers: []
	W0216 09:50:08.703615   18426 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:50:08.703685   18426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:50:08.720435   18426 logs.go:276] 0 containers: []
	W0216 09:50:08.720486   18426 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:50:08.720603   18426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:50:08.737690   18426 logs.go:276] 0 containers: []
	W0216 09:50:08.737704   18426 logs.go:278] No container was found matching "kindnet"
	I0216 09:50:08.737712   18426 logs.go:123] Gathering logs for kubelet ...
	I0216 09:50:08.737720   18426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:50:08.784335   18426 logs.go:123] Gathering logs for dmesg ...
	I0216 09:50:08.784355   18426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:50:08.820035   18426 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:50:08.820053   18426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:50:08.907213   18426 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:50:08.907227   18426 logs.go:123] Gathering logs for Docker ...
	I0216 09:50:08.907241   18426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:50:08.929671   18426 logs.go:123] Gathering logs for container status ...
	I0216 09:50:08.929686   18426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
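The backtick fallback above prefers crictl when it is installed and otherwise degrades to the Docker CLI: when crictl is missing, `which` fails, the echo substitutes a bare crictl that errors out, and the || branch runs docker ps -a instead. The same pattern written out explicitly (illustrative):
	# Prefer crictl if present, else fall back to the Docker CLI.
	if command -v crictl >/dev/null 2>&1; then
	    sudo crictl ps -a
	else
	    sudo docker ps -a
	fi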
	W0216 09:50:08.991803   18426 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 09:50:08.991827   18426 out.go:239] * 
	W0216 09:50:08.991867   18426 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 09:50:08.991900   18426 out.go:239] * 
	W0216 09:50:08.992531   18426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 09:50:09.056346   18426 out.go:177] 
	W0216 09:50:09.099071   18426 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 09:50:09.099120   18426 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 09:50:09.099140   18426 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 09:50:09.120236   18426 out.go:177] 

                                                
                                                
** /stderr **
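The kubeadm failure above comes down to the kubelet never answering its health probe, so the control-plane static pods were never confirmed. As a minimal hand-check, assuming the profile name old-k8s-version-356000 from this run (the individual commands are the same ones kubeadm recommends above):

	# open a shell on the node container for this profile
	out/minikube-darwin-amd64 ssh -p old-k8s-version-356000
	# the same healthz probe kubeadm retries during wait-control-plane
	curl -sSL http://localhost:10248/healthz
	# kubelet unit state and recent log entries
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50

If the cgroup-driver suggestion applies, a retry of this start with the hinted flag would look like the following sketch (flags copied from the failing invocation, not re-verified here):

	out/minikube-darwin-amd64 start -p old-k8s-version-356000 --driver=docker \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd

and the log bundle the issue template asks for can be captured with:

	out/minikube-darwin-amd64 -p old-k8s-version-356000 logs --file=logs.txt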
start_stop_delete_test.go:188: failed starting minikube (first start). args "out/minikube-darwin-amd64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-356000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-356000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01",
	        "Created": "2024-02-16T17:45:56.532939996Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 350253,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:45:56.786241358Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hosts",
	        "LogPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01-json.log",
	        "Name": "/old-k8s-version-356000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-356000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-356000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-356000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-356000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-356000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "316a645e6070a04df20f8a3c2b5778a2943dd3ff2014cf5c036ba10fc84a1550",
	            "SandboxKey": "/var/run/docker/netns/316a645e6070",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53810"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53807"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-356000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c7e40ba5a933",
	                        "old-k8s-version-356000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "2b231f9382e31cc79f696866baa9c7eea268e7a10c9edda380cefa5e7ba22d21",
	                    "EndpointID": "bdb22bb2035679a82685e03e38fabe1078df25fe59cde9a2fd156de399fe1054",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-356000",
	                        "c7e40ba5a933"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 6 (435.127395ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 09:50:09.694705   19295 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-356000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (257.46s)
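The status output above repeatedly warns that kubectl points at a stale context. The warning's own remedy is worth noting as a sketch, although the real blocker in this test is that the cluster never finished starting, so this only repairs the kubeconfig pointer:

	# rewrite the kubeconfig entry for this profile to the current endpoint
	out/minikube-darwin-amd64 -p old-k8s-version-356000 update-context
	# confirm which context kubectl now resolves
	kubectl config current-context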

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-356000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-356000 create -f testdata/busybox.yaml: exit status 1 (41.01475ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-356000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-356000 create -f testdata/busybox.yaml failed: exit status 1
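The exit status 1 above is kubectl reporting that no context named old-k8s-version-356000 exists, consistent with the earlier status errors: the failed first start never wrote the profile into the kubeconfig. A quick check, assuming the kubeconfig path quoted in those errors:

	# list every context kubectl knows about; the profile should be absent
	kubectl config get-contexts
	grep old-k8s-version-356000 /Users/jenkins/minikube-integration/17936-1021/kubeconfig || echo "profile not in kubeconfig"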
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-356000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-356000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01",
	        "Created": "2024-02-16T17:45:56.532939996Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 350253,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:45:56.786241358Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hosts",
	        "LogPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01-json.log",
	        "Name": "/old-k8s-version-356000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-356000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-356000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-356000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-356000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-356000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "316a645e6070a04df20f8a3c2b5778a2943dd3ff2014cf5c036ba10fc84a1550",
	            "SandboxKey": "/var/run/docker/netns/316a645e6070",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53810"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53807"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-356000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c7e40ba5a933",
	                        "old-k8s-version-356000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "2b231f9382e31cc79f696866baa9c7eea268e7a10c9edda380cefa5e7ba22d21",
	                    "EndpointID": "bdb22bb2035679a82685e03e38fabe1078df25fe59cde9a2fd156de399fe1054",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-356000",
	                        "c7e40ba5a933"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 6 (402.222333ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 09:50:10.199391   19308 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-356000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-356000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-356000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01",
	        "Created": "2024-02-16T17:45:56.532939996Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 350253,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:45:56.786241358Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hosts",
	        "LogPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01-json.log",
	        "Name": "/old-k8s-version-356000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-356000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-356000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-356000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-356000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-356000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "316a645e6070a04df20f8a3c2b5778a2943dd3ff2014cf5c036ba10fc84a1550",
	            "SandboxKey": "/var/run/docker/netns/316a645e6070",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53810"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53807"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-356000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c7e40ba5a933",
	                        "old-k8s-version-356000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "2b231f9382e31cc79f696866baa9c7eea268e7a10c9edda380cefa5e7ba22d21",
	                    "EndpointID": "bdb22bb2035679a82685e03e38fabe1078df25fe59cde9a2fd156de399fe1054",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-356000",
	                        "c7e40ba5a933"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 6 (407.173561ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 09:50:10.660467   19320 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-356000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-356000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0216 09:50:11.933584    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:11.939046    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:11.949468    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:11.969947    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:12.010188    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:12.091447    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:12.252239    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:12.574377    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:13.216008    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:14.496308    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:16.911825    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:50:17.057450    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:22.179768    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:32.421981    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:50:34.793309    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:50:37.392545    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:50:52.903211    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:51:02.477586    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:51:15.123028    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:15.128641    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:15.138774    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:15.159721    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:15.199860    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:15.280941    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:15.441607    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:15.761929    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:16.402084    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:17.177024    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:51:17.682248    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:18.353952    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:51:20.242467    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:25.363096    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:51:30.803654    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:51:33.864252    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:51:35.604989    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-356000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.84570626s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-356000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-356000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-356000 describe deploy/metrics-server -n kube-system: exit status 1 (39.947933ms)

** stderr ** 
	error: context "old-k8s-version-356000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-356000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-356000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-356000:

-- stdout --
	[
	    {
	        "Id": "c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01",
	        "Created": "2024-02-16T17:45:56.532939996Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 350253,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:45:56.786241358Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hosts",
	        "LogPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01-json.log",
	        "Name": "/old-k8s-version-356000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-356000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-356000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-356000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-356000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-356000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "316a645e6070a04df20f8a3c2b5778a2943dd3ff2014cf5c036ba10fc84a1550",
	            "SandboxKey": "/var/run/docker/netns/316a645e6070",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53810"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53807"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-356000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c7e40ba5a933",
	                        "old-k8s-version-356000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "2b231f9382e31cc79f696866baa9c7eea268e7a10c9edda380cefa5e7ba22d21",
	                    "EndpointID": "bdb22bb2035679a82685e03e38fabe1078df25fe59cde9a2fd156de399fe1054",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-356000",
	                        "c7e40ba5a933"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 6 (415.061098ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0216 09:51:47.016599   19355 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-356000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-356000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.36s)

TestStartStop/group/old-k8s-version/serial/SecondStart (509.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0216 09:51:56.085597    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:52:05.107884    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:52:14.954813    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:52:32.789111    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:52:37.047447    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:52:40.289135    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:52:42.638979    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:52:55.786737    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:53:01.526145    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 09:53:33.270367    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:53:46.965020    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:53:51.282337    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:53:58.969930    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 09:53:59.741021    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 09:54:01.020986    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:54:14.648735    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:54:56.433594    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:55:11.939282    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 09:55:24.132576    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m27.076455766s)

-- stdout --
	* [old-k8s-version-356000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=17936
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-356000 in cluster old-k8s-version-356000
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Restarting existing docker container for "old-k8s-version-356000" ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0216 09:51:49.084053   19385 out.go:291] Setting OutFile to fd 1 ...
	I0216 09:51:49.084372   19385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:51:49.084377   19385 out.go:304] Setting ErrFile to fd 2...
	I0216 09:51:49.084381   19385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:51:49.084558   19385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 09:51:49.086015   19385 out.go:298] Setting JSON to false
	I0216 09:51:49.109332   19385 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4880,"bootTime":1708101029,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 09:51:49.109437   19385 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 09:51:49.131560   19385 out.go:177] * [old-k8s-version-356000] minikube v1.32.0 on Darwin 14.3.1
	I0216 09:51:49.175245   19385 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 09:51:49.175304   19385 notify.go:220] Checking for updates...
	I0216 09:51:49.218343   19385 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 09:51:49.262464   19385 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 09:51:49.283507   19385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 09:51:49.311457   19385 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	I0216 09:51:49.332022   19385 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 09:51:49.353565   19385 config.go:182] Loaded profile config "old-k8s-version-356000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 09:51:49.376891   19385 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0216 09:51:49.398327   19385 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 09:51:49.456185   19385 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 09:51:49.456333   19385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:51:49.558741   19385 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-16 17:51:49.547876861 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:51:49.580464   19385 out.go:177] * Using the docker driver based on existing profile
	I0216 09:51:49.601909   19385 start.go:299] selected driver: docker
	I0216 09:51:49.601933   19385 start.go:903] validating driver "docker" against &{Name:old-k8s-version-356000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:51:49.602054   19385 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 09:51:49.606365   19385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:51:49.714321   19385 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-16 17:51:49.703317932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:51:49.714583   19385 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 09:51:49.714632   19385 cni.go:84] Creating CNI manager for ""
	I0216 09:51:49.714645   19385 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 09:51:49.714655   19385 start_flags.go:323] config:
	{Name:old-k8s-version-356000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:51:49.757553   19385 out.go:177] * Starting control plane node old-k8s-version-356000 in cluster old-k8s-version-356000
	I0216 09:51:49.778385   19385 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 09:51:49.799572   19385 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 09:51:49.841312   19385 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 09:51:49.841369   19385 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 09:51:49.841371   19385 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 09:51:49.841421   19385 cache.go:56] Caching tarball of preloaded images
	I0216 09:51:49.841637   19385 preload.go:174] Found /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 09:51:49.841658   19385 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 09:51:49.842610   19385 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/config.json ...
	I0216 09:51:49.894124   19385 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 09:51:49.894147   19385 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 09:51:49.894177   19385 cache.go:194] Successfully downloaded all kic artifacts
	I0216 09:51:49.894227   19385 start.go:365] acquiring machines lock for old-k8s-version-356000: {Name:mkcbb668d74284a5583a7ae9844b8f225578b58f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 09:51:49.894332   19385 start.go:369] acquired machines lock for "old-k8s-version-356000" in 83.407µs
	I0216 09:51:49.894369   19385 start.go:96] Skipping create...Using existing machine configuration
	I0216 09:51:49.894380   19385 fix.go:54] fixHost starting: 
	I0216 09:51:49.894614   19385 cli_runner.go:164] Run: docker container inspect old-k8s-version-356000 --format={{.State.Status}}
	I0216 09:51:49.946468   19385 fix.go:102] recreateIfNeeded on old-k8s-version-356000: state=Stopped err=<nil>
	W0216 09:51:49.946502   19385 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 09:51:49.970081   19385 out.go:177] * Restarting existing docker container for "old-k8s-version-356000" ...
	I0216 09:51:50.011718   19385 cli_runner.go:164] Run: docker start old-k8s-version-356000
	I0216 09:51:50.258136   19385 cli_runner.go:164] Run: docker container inspect old-k8s-version-356000 --format={{.State.Status}}
	I0216 09:51:50.318246   19385 kic.go:430] container "old-k8s-version-356000" state is running.
	I0216 09:51:50.318876   19385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356000
	I0216 09:51:50.378645   19385 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/config.json ...
	I0216 09:51:50.379099   19385 machine.go:88] provisioning docker machine ...
	I0216 09:51:50.379129   19385 ubuntu.go:169] provisioning hostname "old-k8s-version-356000"
	I0216 09:51:50.379213   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:51:50.437764   19385 main.go:141] libmachine: Using SSH client type: native
	I0216 09:51:50.438242   19385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54075 <nil> <nil>}
	I0216 09:51:50.438256   19385 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-356000 && echo "old-k8s-version-356000" | sudo tee /etc/hostname
	I0216 09:51:50.440048   19385 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0216 09:51:53.600764   19385 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-356000
	
	I0216 09:51:53.600848   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:51:53.653791   19385 main.go:141] libmachine: Using SSH client type: native
	I0216 09:51:53.654104   19385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54075 <nil> <nil>}
	I0216 09:51:53.654117   19385 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-356000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-356000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-356000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 09:51:53.790604   19385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 09:51:53.790625   19385 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17936-1021/.minikube CaCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17936-1021/.minikube}
	I0216 09:51:53.790652   19385 ubuntu.go:177] setting up certificates
	I0216 09:51:53.790662   19385 provision.go:83] configureAuth start
	I0216 09:51:53.790732   19385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356000
	I0216 09:51:53.843524   19385 provision.go:138] copyHostCerts
	I0216 09:51:53.843656   19385 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem, removing ...
	I0216 09:51:53.843665   19385 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
	I0216 09:51:53.843817   19385 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem (1082 bytes)
	I0216 09:51:53.844062   19385 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem, removing ...
	I0216 09:51:53.844069   19385 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
	I0216 09:51:53.844168   19385 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem (1123 bytes)
	I0216 09:51:53.844354   19385 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem, removing ...
	I0216 09:51:53.844360   19385 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
	I0216 09:51:53.844451   19385 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem (1675 bytes)
	I0216 09:51:53.844583   19385 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-356000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-356000]
	I0216 09:51:53.920987   19385 provision.go:172] copyRemoteCerts
	I0216 09:51:53.921061   19385 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 09:51:53.921122   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:51:53.973381   19385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54075 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa Username:docker}
	I0216 09:51:54.075652   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 09:51:54.116119   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0216 09:51:54.156616   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 09:51:54.197098   19385 provision.go:86] duration metric: configureAuth took 406.410859ms
	I0216 09:51:54.197112   19385 ubuntu.go:193] setting minikube options for container-runtime
	I0216 09:51:54.197258   19385 config.go:182] Loaded profile config "old-k8s-version-356000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 09:51:54.197331   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:51:54.249808   19385 main.go:141] libmachine: Using SSH client type: native
	I0216 09:51:54.250105   19385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54075 <nil> <nil>}
	I0216 09:51:54.250114   19385 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 09:51:54.385880   19385 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 09:51:54.385901   19385 ubuntu.go:71] root file system type: overlay
	I0216 09:51:54.385986   19385 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 09:51:54.386066   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:51:54.439452   19385 main.go:141] libmachine: Using SSH client type: native
	I0216 09:51:54.439745   19385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54075 <nil> <nil>}
	I0216 09:51:54.439798   19385 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 09:51:54.597947   19385 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 09:51:54.598062   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:51:54.650890   19385 main.go:141] libmachine: Using SSH client type: native
	I0216 09:51:54.651190   19385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54075 <nil> <nil>}
	I0216 09:51:54.651203   19385 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 09:51:54.798203   19385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
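The one-liner above is an idempotent update: `diff -u` succeeds when the new unit is identical to the installed one, so the `mv` / `daemon-reload` / `restart` branch only runs when the file actually changed. A local Go sketch of the same compare-then-swap idiom, under the assumption that both unit paths exist (the log runs `systemctl -f`; the force flag is omitted here):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	cur, _ := os.ReadFile("/lib/systemd/system/docker.service")
	next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(cur, next) {
		return // unchanged: skip the needless daemon restart
	}
	if err := os.Rename("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}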
	I0216 09:51:54.798222   19385 machine.go:91] provisioned docker machine in 4.41902783s
	I0216 09:51:54.798234   19385 start.go:300] post-start starting for "old-k8s-version-356000" (driver="docker")
	I0216 09:51:54.798244   19385 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 09:51:54.798317   19385 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 09:51:54.798391   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:51:54.850887   19385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54075 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa Username:docker}
	I0216 09:51:54.953838   19385 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 09:51:54.957928   19385 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 09:51:54.957953   19385 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 09:51:54.957961   19385 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 09:51:54.957966   19385 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 09:51:54.957974   19385 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/addons for local assets ...
	I0216 09:51:54.958076   19385 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/files for local assets ...
	I0216 09:51:54.958263   19385 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> 21512.pem in /etc/ssl/certs
	I0216 09:51:54.958479   19385 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 09:51:54.973238   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:51:55.013766   19385 start.go:303] post-start completed in 215.516171ms
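The filesync scan above mirrors every file under the local .minikube/files tree into the node at the same relative path (here 21512.pem lands in /etc/ssl/certs). A sketch of that mapping, assuming only the root path from the log:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "/Users/jenkins/minikube-integration/17936-1021/.minikube/files"
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		// Destination on the node = path relative to the local root.
		dest := strings.TrimPrefix(p, root)
		fmt.Printf("local asset: %s -> %s\n", p, dest)
		return nil
	})
	if err != nil {
		panic(err)
	}
}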
	I0216 09:51:55.013844   19385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 09:51:55.013901   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:51:55.067881   19385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54075 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa Username:docker}
	I0216 09:51:55.158820   19385 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 09:51:55.164243   19385 fix.go:56] fixHost completed within 5.269753422s
	I0216 09:51:55.164269   19385 start.go:83] releasing machines lock for "old-k8s-version-356000", held for 5.269824402s
	I0216 09:51:55.164362   19385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356000
	I0216 09:51:55.216932   19385 ssh_runner.go:195] Run: cat /version.json
	I0216 09:51:55.216947   19385 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 09:51:55.217005   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:51:55.217029   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:51:55.274077   19385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54075 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa Username:docker}
	I0216 09:51:55.274080   19385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54075 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/old-k8s-version-356000/id_rsa Username:docker}
	I0216 09:51:55.474653   19385 ssh_runner.go:195] Run: systemctl --version
	I0216 09:51:55.479733   19385 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0216 09:51:55.484739   19385 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0216 09:51:55.484792   19385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 09:51:55.500434   19385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 09:51:55.515791   19385 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
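The find/sed passes above rewrite any "subnet" value in bridge and podman CNI configs to the cluster pod CIDR, 10.244.0.0/16 (here there was nothing to rewrite). A regexp-based Go equivalent of the core substitution — the sample JSON is illustrative; a real pass would walk /etc/cni/net.d as the find(1) invocation does:

package main

import (
	"fmt"
	"regexp"
)

var subnetRe = regexp.MustCompile(`"subnet":\s*"[^"]*"`)

func main() {
	conf := `{"ipam": {"type": "host-local", "subnet": "192.168.0.0/24"}}`
	// Force the bridge network onto the pod CIDR used by the cluster.
	fixed := subnetRe.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
	fmt.Println(fixed)
}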
	I0216 09:51:55.515822   19385 start.go:475] detecting cgroup driver to use...
	I0216 09:51:55.515856   19385 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:51:55.515963   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:51:55.544813   19385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0216 09:51:55.561533   19385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 09:51:55.577773   19385 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 09:51:55.577904   19385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 09:51:55.594866   19385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:51:55.611231   19385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 09:51:55.627576   19385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:51:55.643598   19385 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 09:51:55.659220   19385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 09:51:55.675938   19385 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 09:51:55.690992   19385 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 09:51:55.706274   19385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:51:55.768197   19385 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 09:51:55.856299   19385 start.go:475] detecting cgroup driver to use...
	I0216 09:51:55.856321   19385 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:51:55.856392   19385 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 09:51:55.874768   19385 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 09:51:55.874864   19385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 09:51:55.893702   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:51:55.924473   19385 ssh_runner.go:195] Run: which cri-dockerd
	I0216 09:51:55.929793   19385 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 09:51:55.946714   19385 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 09:51:55.978697   19385 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 09:51:56.047239   19385 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 09:51:56.145217   19385 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 09:51:56.145359   19385 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 09:51:56.176795   19385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:51:56.241687   19385 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 09:51:56.505247   19385 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 09:51:56.529623   19385 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 09:51:56.594884   19385 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0216 09:51:56.595001   19385 cli_runner.go:164] Run: docker exec -t old-k8s-version-356000 dig +short host.docker.internal
	I0216 09:51:56.697970   19385 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0216 09:51:56.698066   19385 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0216 09:51:56.702598   19385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
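The /etc/hosts command above is a filter-append-replace: strip any stale host.minikube.internal line, append the fresh mapping, and stage through a temp file. A Go sketch of the same logic; the logged shell goes via /tmp/h.$$ and `sudo cp` because writing /etc/hosts needs root, while this sketch writes in place:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.65.254\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping so reruns stay idempotent.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}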
	I0216 09:51:56.720306   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:51:56.773593   19385 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 09:51:56.773674   19385 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:51:56.792956   19385 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 09:51:56.792972   19385 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 09:51:56.793041   19385 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 09:51:56.809051   19385 ssh_runner.go:195] Run: which lz4
	I0216 09:51:56.813408   19385 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 09:51:56.818223   19385 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 09:51:56.818265   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0216 09:52:03.273245   19385 docker.go:649] Took 6.459753 seconds to copy over tarball
	I0216 09:52:03.273326   19385 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 09:52:04.958952   19385 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.685571992s)
	I0216 09:52:04.958966   19385 ssh_runner.go:146] rm: /preloaded.tar.lz4
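The preload step above copies a ~370 MB lz4-compressed tarball to /preloaded.tar.lz4, unpacks it into /var (which holds /var/lib/docker) so the image store is pre-populated, then deletes it. A sketch of the extraction, assuming tar(1) and lz4(1) are on PATH as they are in the minikube base image:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as the logged command: preserve xattrs (file capabilities)
	// and decompress through lz4.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	os.Remove("/preloaded.tar.lz4")
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}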
	I0216 09:52:05.009468   19385 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 09:52:05.026155   19385 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0216 09:52:05.056861   19385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:52:05.122382   19385 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 09:52:06.124494   19385 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.002065027s)
	I0216 09:52:06.124630   19385 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:52:06.143834   19385 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 09:52:06.143854   19385 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 09:52:06.143865   19385 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 09:52:06.148509   19385 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:52:06.149274   19385 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:52:06.149306   19385 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0216 09:52:06.149328   19385 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:52:06.149514   19385 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0216 09:52:06.149549   19385 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 09:52:06.149718   19385 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:52:06.149744   19385 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0216 09:52:06.155738   19385 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 09:52:06.155835   19385 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0216 09:52:06.157074   19385 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0216 09:52:06.157528   19385 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:52:06.157903   19385 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:52:06.157967   19385 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0216 09:52:06.157951   19385 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:52:06.157973   19385 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:52:08.153014   19385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0216 09:52:08.174537   19385 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0216 09:52:08.174577   19385 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0216 09:52:08.174645   19385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0216 09:52:08.188430   19385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:52:08.193590   19385 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0216 09:52:08.211135   19385 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0216 09:52:08.211160   19385 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:52:08.211223   19385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 09:52:08.225026   19385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0216 09:52:08.230930   19385 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0216 09:52:08.244902   19385 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0216 09:52:08.244933   19385 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0216 09:52:08.244999   19385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0216 09:52:08.247329   19385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:52:08.264678   19385 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0216 09:52:08.266911   19385 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0216 09:52:08.266932   19385 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:52:08.267000   19385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0216 09:52:08.271822   19385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:52:08.278025   19385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0216 09:52:08.285806   19385 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0216 09:52:08.287313   19385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:52:08.293244   19385 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0216 09:52:08.293272   19385 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:52:08.293340   19385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0216 09:52:08.300262   19385 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0216 09:52:08.300293   19385 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0216 09:52:08.300368   19385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0216 09:52:08.310536   19385 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0216 09:52:08.310579   19385 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:52:08.310676   19385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0216 09:52:08.317878   19385 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0216 09:52:08.325226   19385 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0216 09:52:08.331985   19385 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0216 09:52:08.467263   19385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 09:52:08.488209   19385 cache_images.go:92] LoadImages completed in 2.344286279s
	W0216 09:52:08.488262   19385 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
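Each required image is verified by ID: the preloaded tarball carries k8s.gcr.io tags, so every registry.k8s.io name "needs transfer", the stale tag is removed, and a reload from the on-disk cache is attempted (and fails here because the cache files are absent). A sketch of the inspect-and-compare check; reloading via `docker load` is an illustration, not minikube's actual transfer path:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present in the runtime at all
	}
	id := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	return id != wantID
}

func main() {
	image := "registry.k8s.io/etcd:3.3.15-0"
	want := "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed"
	if needsTransfer(image, want) {
		// Drop the stale tag, then a loader would read the cached tarball,
		// e.g. `docker load -i .../cache/images/amd64/registry.k8s.io/etcd_3.3.15-0`.
		exec.Command("docker", "rmi", image).Run()
		fmt.Println("reload", image, "from the local image cache")
	}
}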
	I0216 09:52:08.488348   19385 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 09:52:08.539023   19385 cni.go:84] Creating CNI manager for ""
	I0216 09:52:08.539040   19385 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 09:52:08.539054   19385 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 09:52:08.539072   19385 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-356000 NodeName:old-k8s-version-356000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 09:52:08.539166   19385 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-356000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-356000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0216 09:52:08.539229   19385 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-356000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 09:52:08.539293   19385 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0216 09:52:08.554778   19385 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 09:52:08.554857   19385 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 09:52:08.569459   19385 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0216 09:52:08.599301   19385 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 09:52:08.628825   19385 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0216 09:52:08.657379   19385 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0216 09:52:08.661951   19385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 09:52:08.679309   19385 certs.go:56] Setting up /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000 for IP: 192.168.76.2
	I0216 09:52:08.679331   19385 certs.go:190] acquiring lock for shared ca certs: {Name:mk8795f926ccc5dd497b243df5a2c158b5c5b28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:52:08.679561   19385 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key
	I0216 09:52:08.679663   19385 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key
	I0216 09:52:08.679817   19385 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/client.key
	I0216 09:52:08.679957   19385 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.key.31bdca25
	I0216 09:52:08.680037   19385 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/proxy-client.key
	I0216 09:52:08.680254   19385 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem (1338 bytes)
	W0216 09:52:08.680301   19385 certs.go:433] ignoring /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151_empty.pem, impossibly tiny 0 bytes
	I0216 09:52:08.680311   19385 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 09:52:08.680341   19385 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem (1082 bytes)
	I0216 09:52:08.680374   19385 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem (1123 bytes)
	I0216 09:52:08.680402   19385 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem (1675 bytes)
	I0216 09:52:08.680470   19385 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:52:08.680981   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 09:52:08.722114   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0216 09:52:08.765205   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 09:52:08.807470   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/old-k8s-version-356000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 09:52:08.848351   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 09:52:08.890010   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 09:52:08.932069   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 09:52:08.972936   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 09:52:09.015066   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 09:52:09.057031   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem --> /usr/share/ca-certificates/2151.pem (1338 bytes)
	I0216 09:52:09.098723   19385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /usr/share/ca-certificates/21512.pem (1708 bytes)
	I0216 09:52:09.148212   19385 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 09:52:09.178126   19385 ssh_runner.go:195] Run: openssl version
	I0216 09:52:09.183636   19385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 09:52:09.200876   19385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:52:09.205728   19385 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:52:09.205795   19385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:52:09.212352   19385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 09:52:09.228537   19385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2151.pem && ln -fs /usr/share/ca-certificates/2151.pem /etc/ssl/certs/2151.pem"
	I0216 09:52:09.244930   19385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2151.pem
	I0216 09:52:09.249475   19385 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:51 /usr/share/ca-certificates/2151.pem
	I0216 09:52:09.249536   19385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2151.pem
	I0216 09:52:09.256762   19385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2151.pem /etc/ssl/certs/51391683.0"
	I0216 09:52:09.272754   19385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21512.pem && ln -fs /usr/share/ca-certificates/21512.pem /etc/ssl/certs/21512.pem"
	I0216 09:52:09.288887   19385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21512.pem
	I0216 09:52:09.293189   19385 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:51 /usr/share/ca-certificates/21512.pem
	I0216 09:52:09.293243   19385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21512.pem
	I0216 09:52:09.299717   19385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21512.pem /etc/ssl/certs/3ec20f2e.0"
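The `openssl x509 -hash` plus `ln -fs` pairs above wire each CA into the OpenSSL trust store: lookups in /etc/ssl/certs go by subject hash, so every PEM gets a "<hash>.0" symlink (e.g. b5213941.0 for minikubeCA.pem). A sketch that shells out to openssl(1) for the hash, assuming it is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	os.Remove(link) // replace any stale link, like `ln -fs`
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}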
	I0216 09:52:09.315889   19385 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 09:52:09.320196   19385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 09:52:09.327426   19385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 09:52:09.333856   19385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 09:52:09.340324   19385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 09:52:09.347243   19385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 09:52:09.355436   19385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
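The `-checkend 86400` runs above verify that each control-plane certificate is still valid 24 hours from now; anything closer to expiry would be regenerated. The same check in pure Go using crypto/x509 (the cert path is one of those from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// i.e. what `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}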
	I0216 09:52:09.362435   19385 kubeadm.go:404] StartCluster: {Name:old-k8s-version-356000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:52:09.362562   19385 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 09:52:09.381056   19385 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 09:52:09.397318   19385 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 09:52:09.397338   19385 kubeadm.go:636] restartCluster start
	I0216 09:52:09.397396   19385 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 09:52:09.413057   19385 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:09.413230   19385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-356000
	I0216 09:52:09.468482   19385 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-356000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 09:52:09.468642   19385 kubeconfig.go:146] "old-k8s-version-356000" context is missing from /Users/jenkins/minikube-integration/17936-1021/kubeconfig - will repair!
	I0216 09:52:09.468961   19385 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/kubeconfig: {Name:mkc64745a91dd32fe2631c66fb95eca6401b716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:52:09.470453   19385 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 09:52:09.486011   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:09.486083   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:09.502145   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:09.986239   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:09.986341   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:10.002933   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:10.486564   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:10.486673   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:10.504976   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:10.986190   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:10.986339   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:11.003368   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:11.486684   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:11.486782   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:11.504584   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:11.986524   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:11.986630   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:12.003752   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:12.486185   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:12.486321   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:12.504175   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:12.986189   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:12.986319   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:13.003503   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:13.486941   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:13.487051   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:13.506361   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:13.988170   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:13.988296   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:14.005633   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:14.487971   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:14.488087   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:14.505072   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:14.986220   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:14.986319   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:15.003798   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:15.486234   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:15.486342   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:15.503278   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:15.986209   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:15.986311   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:16.004664   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:16.486293   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:16.486424   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:16.504315   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:16.986977   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:16.987103   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:17.004335   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:17.487903   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:17.487991   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:17.505277   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:17.986821   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:17.986978   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:18.004238   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:18.487027   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:18.487109   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:18.503703   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:18.986899   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:18.986977   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:19.004239   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:19.486979   19385 api_server.go:166] Checking apiserver status ...
	I0216 09:52:19.487059   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:52:19.504199   19385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:52:19.504215   19385 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
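The block above is a fixed-interval poll: `pgrep` runs roughly every 500 ms until a context deadline expires, at which point the cluster is declared in need of reconfiguration. A Go sketch of that loop; the 10 s timeout is an illustrative stand-in for whatever deadline the verifier uses:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	for {
		// Same probe as the log: find the apiserver pid by command line.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		select {
		case <-ctx.Done():
			// ctx.Err() is context.DeadlineExceeded, matching the log line.
			fmt.Println("needs reconfigure: apiserver error:", ctx.Err())
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}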
	I0216 09:52:19.504231   19385 kubeadm.go:1135] stopping kube-system containers ...
	I0216 09:52:19.504316   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 09:52:19.521866   19385 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0216 09:52:19.539790   19385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:52:19.555171   19385 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Feb 16 17:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Feb 16 17:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Feb 16 17:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Feb 16 17:48 /etc/kubernetes/scheduler.conf
	
	I0216 09:52:19.555237   19385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0216 09:52:19.570120   19385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0216 09:52:19.586852   19385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0216 09:52:19.602161   19385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0216 09:52:19.617444   19385 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 09:52:19.633529   19385 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 09:52:19.633543   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:52:19.695764   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:52:20.368495   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:52:20.567609   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:52:20.651441   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
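Rather than a full `kubeadm init`, the restart path replays individual init phases against the rendered kubeadm.yaml, in the order logged above. A sketch of that sequence, using the version-pinned binary path from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, ph := range phases {
		args := append([]string{"init", "phase"}, ph...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.16.0/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(fmt.Sprintf("phase %v: %v", ph, err))
		}
	}
}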
	I0216 09:52:20.753053   19385 api_server.go:52] waiting for apiserver process to appear ...
	I0216 09:52:20.753116   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:21.253537   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:21.754083   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:22.253769   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:22.753433   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:23.253379   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:23.753989   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:24.253315   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:24.754444   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:25.253842   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:25.753903   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:26.253525   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:26.753338   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:27.253368   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:27.754339   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:28.253605   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:28.753493   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:29.254271   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:29.753426   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:30.253490   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:30.754085   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:31.253522   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:31.753563   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:32.254968   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:32.753479   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:33.253467   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:33.754137   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:34.253642   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:52:34.753522   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... same pgrep probe repeated at ~0.5s intervals from 09:52:35 to 09:53:19; 90 similar lines omitted ...]
	I0216 09:53:20.254513   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:20.754755   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:20.774385   19385 logs.go:276] 0 containers: []
	W0216 09:53:20.774398   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:20.774483   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:20.792013   19385 logs.go:276] 0 containers: []
	W0216 09:53:20.792027   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:20.792105   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:20.811119   19385 logs.go:276] 0 containers: []
	W0216 09:53:20.811131   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:20.811196   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:20.832121   19385 logs.go:276] 0 containers: []
	W0216 09:53:20.832135   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:20.832213   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:20.850106   19385 logs.go:276] 0 containers: []
	W0216 09:53:20.850120   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:20.850191   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:20.868251   19385 logs.go:276] 0 containers: []
	W0216 09:53:20.868265   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:20.868335   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:20.887489   19385 logs.go:276] 0 containers: []
	W0216 09:53:20.887502   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:20.887572   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:20.907102   19385 logs.go:276] 0 containers: []
	W0216 09:53:20.907116   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:20.907127   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:20.907158   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:20.948910   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:20.948927   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:20.969800   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:20.969819   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:21.109466   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:53:21.109489   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:21.109499   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:21.132332   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:21.132348   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:23.698461   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:23.715631   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:23.734973   19385 logs.go:276] 0 containers: []
	W0216 09:53:23.735003   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:23.735078   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:23.753650   19385 logs.go:276] 0 containers: []
	W0216 09:53:23.753665   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:23.753737   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:23.772173   19385 logs.go:276] 0 containers: []
	W0216 09:53:23.772186   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:23.772259   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:23.789813   19385 logs.go:276] 0 containers: []
	W0216 09:53:23.789826   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:23.789892   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:23.808790   19385 logs.go:276] 0 containers: []
	W0216 09:53:23.808805   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:23.808891   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:23.829118   19385 logs.go:276] 0 containers: []
	W0216 09:53:23.829133   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:23.829208   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:23.848185   19385 logs.go:276] 0 containers: []
	W0216 09:53:23.848199   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:23.848269   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:23.866568   19385 logs.go:276] 0 containers: []
	W0216 09:53:23.866583   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:23.866597   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:23.866607   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:23.931585   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:23.931601   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:23.931608   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:23.952938   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:23.952951   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:24.017881   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:24.017912   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:24.064926   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:24.064943   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:26.586177   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:26.602656   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:26.621117   19385 logs.go:276] 0 containers: []
	W0216 09:53:26.621132   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:26.621210   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:26.639494   19385 logs.go:276] 0 containers: []
	W0216 09:53:26.639508   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:26.639577   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:26.658864   19385 logs.go:276] 0 containers: []
	W0216 09:53:26.658879   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:26.658937   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:26.678202   19385 logs.go:276] 0 containers: []
	W0216 09:53:26.678219   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:26.678309   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:26.696547   19385 logs.go:276] 0 containers: []
	W0216 09:53:26.696561   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:26.696630   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:26.716334   19385 logs.go:276] 0 containers: []
	W0216 09:53:26.716347   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:26.716413   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:26.735466   19385 logs.go:276] 0 containers: []
	W0216 09:53:26.735480   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:26.735624   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:26.754896   19385 logs.go:276] 0 containers: []
	W0216 09:53:26.754909   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:26.754916   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:26.754923   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:26.775932   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:26.775951   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:26.841208   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:26.841223   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:26.885539   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:26.885555   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:26.905802   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:26.905818   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:26.986821   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:29.487252   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:29.504673   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:29.522636   19385 logs.go:276] 0 containers: []
	W0216 09:53:29.522651   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:29.522723   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:29.541434   19385 logs.go:276] 0 containers: []
	W0216 09:53:29.541447   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:29.541514   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:29.560493   19385 logs.go:276] 0 containers: []
	W0216 09:53:29.560507   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:29.560581   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:29.578518   19385 logs.go:276] 0 containers: []
	W0216 09:53:29.578532   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:29.578609   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:29.598412   19385 logs.go:276] 0 containers: []
	W0216 09:53:29.598426   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:29.598492   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:29.616205   19385 logs.go:276] 0 containers: []
	W0216 09:53:29.616218   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:29.616283   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:29.635938   19385 logs.go:276] 0 containers: []
	W0216 09:53:29.635952   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:29.636026   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:29.653548   19385 logs.go:276] 0 containers: []
	W0216 09:53:29.653563   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:29.653571   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:29.653578   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:29.696499   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:29.696516   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:29.717753   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:29.717770   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:29.910325   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:29.910345   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:29.910353   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:29.932554   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:29.932580   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:32.500507   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:32.521607   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:32.545090   19385 logs.go:276] 0 containers: []
	W0216 09:53:32.545109   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:32.545201   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:32.563656   19385 logs.go:276] 0 containers: []
	W0216 09:53:32.563671   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:32.563737   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:32.610728   19385 logs.go:276] 0 containers: []
	W0216 09:53:32.610744   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:32.610815   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:32.629584   19385 logs.go:276] 0 containers: []
	W0216 09:53:32.629597   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:32.629673   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:32.648382   19385 logs.go:276] 0 containers: []
	W0216 09:53:32.648395   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:32.648458   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:32.668170   19385 logs.go:276] 0 containers: []
	W0216 09:53:32.668185   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:32.668253   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:32.688463   19385 logs.go:276] 0 containers: []
	W0216 09:53:32.688478   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:32.688553   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:32.707586   19385 logs.go:276] 0 containers: []
	W0216 09:53:32.707600   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:32.707607   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:32.707617   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:32.774027   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:32.774043   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:32.819511   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:32.819528   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:32.839861   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:32.839899   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:32.903563   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:32.903642   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:32.903650   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:35.425440   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:35.441823   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:35.459470   19385 logs.go:276] 0 containers: []
	W0216 09:53:35.459484   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:35.459555   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:35.477879   19385 logs.go:276] 0 containers: []
	W0216 09:53:35.477893   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:35.477960   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:35.498619   19385 logs.go:276] 0 containers: []
	W0216 09:53:35.498637   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:35.498717   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:35.519570   19385 logs.go:276] 0 containers: []
	W0216 09:53:35.519582   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:35.519654   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:35.541689   19385 logs.go:276] 0 containers: []
	W0216 09:53:35.541702   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:35.541763   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:35.560712   19385 logs.go:276] 0 containers: []
	W0216 09:53:35.560726   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:35.560800   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:35.580337   19385 logs.go:276] 0 containers: []
	W0216 09:53:35.580353   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:35.580423   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:35.599495   19385 logs.go:276] 0 containers: []
	W0216 09:53:35.599509   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:35.599516   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:35.599525   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:35.619101   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:35.619115   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:35.686410   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:35.686426   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:35.686434   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:35.707508   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:35.707523   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:35.773029   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:35.773043   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:38.315848   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:38.332040   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:38.351807   19385 logs.go:276] 0 containers: []
	W0216 09:53:38.351820   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:38.351884   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:38.370936   19385 logs.go:276] 0 containers: []
	W0216 09:53:38.370950   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:38.371016   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:38.389506   19385 logs.go:276] 0 containers: []
	W0216 09:53:38.389519   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:38.389585   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:38.409350   19385 logs.go:276] 0 containers: []
	W0216 09:53:38.409366   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:38.409439   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:38.428415   19385 logs.go:276] 0 containers: []
	W0216 09:53:38.428460   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:38.428570   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:38.446877   19385 logs.go:276] 0 containers: []
	W0216 09:53:38.446892   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:38.446966   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:38.465755   19385 logs.go:276] 0 containers: []
	W0216 09:53:38.465784   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:38.465850   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:38.484487   19385 logs.go:276] 0 containers: []
	W0216 09:53:38.484509   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:38.484525   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:38.484537   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:38.550692   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:38.550708   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:38.594107   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:38.594127   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:38.616092   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:38.616145   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:38.685884   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:38.685896   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:38.685904   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:41.208584   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:41.225427   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:41.244329   19385 logs.go:276] 0 containers: []
	W0216 09:53:41.244343   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:41.244408   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:41.262967   19385 logs.go:276] 0 containers: []
	W0216 09:53:41.262981   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:41.263048   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:41.282170   19385 logs.go:276] 0 containers: []
	W0216 09:53:41.282183   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:41.282247   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:41.300609   19385 logs.go:276] 0 containers: []
	W0216 09:53:41.300631   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:41.300702   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:41.319848   19385 logs.go:276] 0 containers: []
	W0216 09:53:41.319862   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:41.319930   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:41.337942   19385 logs.go:276] 0 containers: []
	W0216 09:53:41.337955   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:41.338026   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:41.356339   19385 logs.go:276] 0 containers: []
	W0216 09:53:41.356353   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:41.356425   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:41.376786   19385 logs.go:276] 0 containers: []
	W0216 09:53:41.376801   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:41.376809   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:41.376817   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:41.418911   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:41.418926   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:41.439101   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:41.439139   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:41.508962   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:41.508975   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:41.508983   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:41.532054   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:41.532068   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:44.100143   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:44.126334   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:44.145546   19385 logs.go:276] 0 containers: []
	W0216 09:53:44.145559   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:44.145629   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:44.163600   19385 logs.go:276] 0 containers: []
	W0216 09:53:44.163615   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:44.163690   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:44.182332   19385 logs.go:276] 0 containers: []
	W0216 09:53:44.182353   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:44.182430   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:44.202137   19385 logs.go:276] 0 containers: []
	W0216 09:53:44.202150   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:44.202214   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:44.220761   19385 logs.go:276] 0 containers: []
	W0216 09:53:44.220775   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:44.220853   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:44.241209   19385 logs.go:276] 0 containers: []
	W0216 09:53:44.241222   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:44.241289   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:44.260415   19385 logs.go:276] 0 containers: []
	W0216 09:53:44.260428   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:44.260502   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:44.278726   19385 logs.go:276] 0 containers: []
	W0216 09:53:44.278740   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:44.278747   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:44.278754   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:44.323794   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:44.323809   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:44.343813   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:44.343833   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:44.421600   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:44.421612   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:44.421620   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:44.443903   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:44.443919   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:47.009584   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:47.028872   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:47.048614   19385 logs.go:276] 0 containers: []
	W0216 09:53:47.048636   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:47.048708   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:47.068844   19385 logs.go:276] 0 containers: []
	W0216 09:53:47.068857   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:47.068924   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:47.087022   19385 logs.go:276] 0 containers: []
	W0216 09:53:47.087036   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:47.087125   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:47.104628   19385 logs.go:276] 0 containers: []
	W0216 09:53:47.104642   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:47.104711   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:47.121732   19385 logs.go:276] 0 containers: []
	W0216 09:53:47.121761   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:47.121835   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:47.139893   19385 logs.go:276] 0 containers: []
	W0216 09:53:47.139922   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:47.139990   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:47.159477   19385 logs.go:276] 0 containers: []
	W0216 09:53:47.159522   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:47.159643   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:47.178121   19385 logs.go:276] 0 containers: []
	W0216 09:53:47.178135   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:47.178143   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:47.178151   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:47.244684   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:47.244714   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:47.244742   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:47.267020   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:47.267038   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:47.330917   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:47.330936   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:47.376735   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:47.376751   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:49.899104   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:49.917024   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:49.934274   19385 logs.go:276] 0 containers: []
	W0216 09:53:49.934289   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:49.934353   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:49.951985   19385 logs.go:276] 0 containers: []
	W0216 09:53:49.952000   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:49.952076   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:49.970316   19385 logs.go:276] 0 containers: []
	W0216 09:53:49.970330   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:49.970391   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:49.990952   19385 logs.go:276] 0 containers: []
	W0216 09:53:49.990966   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:49.991033   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:50.013126   19385 logs.go:276] 0 containers: []
	W0216 09:53:50.013142   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:50.013207   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:50.036783   19385 logs.go:276] 0 containers: []
	W0216 09:53:50.036799   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:50.036871   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:50.056960   19385 logs.go:276] 0 containers: []
	W0216 09:53:50.056974   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:50.057045   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:50.112511   19385 logs.go:276] 0 containers: []
	W0216 09:53:50.112526   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:50.112534   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:50.112549   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:50.157945   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:50.157975   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:50.178314   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:50.178334   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:50.257947   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:50.257963   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:50.257973   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:50.279597   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:50.279619   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:52.844984   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:52.863390   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:52.881930   19385 logs.go:276] 0 containers: []
	W0216 09:53:52.881944   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:52.882017   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:52.899846   19385 logs.go:276] 0 containers: []
	W0216 09:53:52.899863   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:52.899952   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:52.918175   19385 logs.go:276] 0 containers: []
	W0216 09:53:52.918189   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:52.918256   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:52.937364   19385 logs.go:276] 0 containers: []
	W0216 09:53:52.937379   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:52.937449   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:52.956968   19385 logs.go:276] 0 containers: []
	W0216 09:53:52.956996   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:52.957064   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:52.975135   19385 logs.go:276] 0 containers: []
	W0216 09:53:52.975151   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:52.975220   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:52.994471   19385 logs.go:276] 0 containers: []
	W0216 09:53:52.994489   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:52.994561   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:53.015573   19385 logs.go:276] 0 containers: []
	W0216 09:53:53.015588   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:53.015595   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:53.015607   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:53.035649   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:53.035666   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:53.113952   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:53.113964   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:53.113976   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:53.135835   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:53.135850   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:53.201630   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:53.201646   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:55.745132   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:55.764349   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:55.786528   19385 logs.go:276] 0 containers: []
	W0216 09:53:55.786547   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:55.786644   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:55.805691   19385 logs.go:276] 0 containers: []
	W0216 09:53:55.805709   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:55.805820   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:55.825438   19385 logs.go:276] 0 containers: []
	W0216 09:53:55.825463   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:55.825588   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:55.846473   19385 logs.go:276] 0 containers: []
	W0216 09:53:55.846487   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:55.846569   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:55.866859   19385 logs.go:276] 0 containers: []
	W0216 09:53:55.866874   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:55.866945   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:55.885265   19385 logs.go:276] 0 containers: []
	W0216 09:53:55.885295   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:55.885364   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:55.905065   19385 logs.go:276] 0 containers: []
	W0216 09:53:55.905080   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:55.905147   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:55.923460   19385 logs.go:276] 0 containers: []
	W0216 09:53:55.923473   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:55.923482   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:55.923500   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:55.966325   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:55.966341   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:55.988665   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:55.988682   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:56.065855   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	[... identical stdout/stderr block as above omitted ...]
	I0216 09:53:56.065868   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:56.065876   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:56.087811   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:56.087830   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:53:58.652997   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:53:58.670392   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:53:58.687938   19385 logs.go:276] 0 containers: []
	W0216 09:53:58.687953   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:53:58.688020   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:53:58.706319   19385 logs.go:276] 0 containers: []
	W0216 09:53:58.706334   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:53:58.706404   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:53:58.725002   19385 logs.go:276] 0 containers: []
	W0216 09:53:58.725016   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:53:58.725087   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:53:58.744376   19385 logs.go:276] 0 containers: []
	W0216 09:53:58.744389   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:53:58.744461   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:53:58.767691   19385 logs.go:276] 0 containers: []
	W0216 09:53:58.767747   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:53:58.767854   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:53:58.824276   19385 logs.go:276] 0 containers: []
	W0216 09:53:58.824290   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:53:58.824380   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:53:58.844962   19385 logs.go:276] 0 containers: []
	W0216 09:53:58.844978   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:53:58.845055   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:53:58.864187   19385 logs.go:276] 0 containers: []
	W0216 09:53:58.864201   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:53:58.864208   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:53:58.864216   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:53:58.908113   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:53:58.908130   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:53:58.928698   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:53:58.928714   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:53:58.997014   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:53:58.997025   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:53:58.997037   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:53:59.018999   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:53:59.019012   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:01.584078   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:01.601217   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:01.618325   19385 logs.go:276] 0 containers: []
	W0216 09:54:01.618339   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:01.618405   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:01.637017   19385 logs.go:276] 0 containers: []
	W0216 09:54:01.637031   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:01.637099   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:01.656535   19385 logs.go:276] 0 containers: []
	W0216 09:54:01.656548   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:01.656617   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:01.675950   19385 logs.go:276] 0 containers: []
	W0216 09:54:01.675979   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:01.676047   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:01.695582   19385 logs.go:276] 0 containers: []
	W0216 09:54:01.695596   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:01.695669   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:01.714978   19385 logs.go:276] 0 containers: []
	W0216 09:54:01.714994   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:01.715059   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:01.733954   19385 logs.go:276] 0 containers: []
	W0216 09:54:01.733968   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:01.734036   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:01.752804   19385 logs.go:276] 0 containers: []
	W0216 09:54:01.752819   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:01.752827   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:01.752834   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:01.796731   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:01.796746   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:01.819268   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:01.819284   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:01.888366   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:01.888378   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:01.888404   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:01.910690   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:01.910706   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:04.478186   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:04.495567   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:04.514284   19385 logs.go:276] 0 containers: []
	W0216 09:54:04.514299   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:04.514370   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:04.531966   19385 logs.go:276] 0 containers: []
	W0216 09:54:04.531979   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:04.532048   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:04.550044   19385 logs.go:276] 0 containers: []
	W0216 09:54:04.550057   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:04.550130   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:04.568796   19385 logs.go:276] 0 containers: []
	W0216 09:54:04.568810   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:04.568875   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:04.588525   19385 logs.go:276] 0 containers: []
	W0216 09:54:04.588539   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:04.588608   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:04.607022   19385 logs.go:276] 0 containers: []
	W0216 09:54:04.607036   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:04.607106   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:04.625950   19385 logs.go:276] 0 containers: []
	W0216 09:54:04.626000   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:04.626068   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:04.644196   19385 logs.go:276] 0 containers: []
	W0216 09:54:04.644210   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:04.644217   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:04.644228   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:04.690377   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:04.690398   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:04.710645   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:04.710685   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:04.778703   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:04.778715   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:04.778722   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:04.800634   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:04.800649   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:07.366498   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:07.383550   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:07.405709   19385 logs.go:276] 0 containers: []
	W0216 09:54:07.405726   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:07.405817   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:07.424510   19385 logs.go:276] 0 containers: []
	W0216 09:54:07.424524   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:07.424592   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:07.443493   19385 logs.go:276] 0 containers: []
	W0216 09:54:07.443509   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:07.443599   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:07.463740   19385 logs.go:276] 0 containers: []
	W0216 09:54:07.463756   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:07.463837   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:07.483637   19385 logs.go:276] 0 containers: []
	W0216 09:54:07.483650   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:07.483745   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:07.503631   19385 logs.go:276] 0 containers: []
	W0216 09:54:07.503645   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:07.503721   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:07.522129   19385 logs.go:276] 0 containers: []
	W0216 09:54:07.522143   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:07.522205   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:07.541162   19385 logs.go:276] 0 containers: []
	W0216 09:54:07.541175   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:07.541182   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:07.541194   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:07.585361   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:07.585377   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:07.606374   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:07.606389   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:07.673325   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:07.673337   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:07.673345   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:07.694701   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:07.694717   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:10.259023   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:10.278196   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:10.323073   19385 logs.go:276] 0 containers: []
	W0216 09:54:10.323087   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:10.323162   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:10.343545   19385 logs.go:276] 0 containers: []
	W0216 09:54:10.343559   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:10.343632   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:10.362315   19385 logs.go:276] 0 containers: []
	W0216 09:54:10.362330   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:10.362400   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:10.381497   19385 logs.go:276] 0 containers: []
	W0216 09:54:10.381512   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:10.381592   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:10.402200   19385 logs.go:276] 0 containers: []
	W0216 09:54:10.402218   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:10.402283   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:10.421115   19385 logs.go:276] 0 containers: []
	W0216 09:54:10.421129   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:10.421199   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:10.439291   19385 logs.go:276] 0 containers: []
	W0216 09:54:10.439305   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:10.439371   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:10.458074   19385 logs.go:276] 0 containers: []
	W0216 09:54:10.458090   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:10.458097   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:10.458104   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:10.500591   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:10.500606   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:10.520517   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:10.520532   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:10.589384   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:10.589403   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:10.589411   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:10.611072   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:10.611091   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:13.175191   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:13.192735   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:13.212062   19385 logs.go:276] 0 containers: []
	W0216 09:54:13.212076   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:13.212158   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:13.231985   19385 logs.go:276] 0 containers: []
	W0216 09:54:13.232000   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:13.232074   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:13.252508   19385 logs.go:276] 0 containers: []
	W0216 09:54:13.252523   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:13.252594   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:13.273620   19385 logs.go:276] 0 containers: []
	W0216 09:54:13.273638   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:13.273713   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:13.319505   19385 logs.go:276] 0 containers: []
	W0216 09:54:13.319518   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:13.319591   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:13.340836   19385 logs.go:276] 0 containers: []
	W0216 09:54:13.340852   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:13.340931   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:13.366092   19385 logs.go:276] 0 containers: []
	W0216 09:54:13.366108   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:13.366237   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:13.386406   19385 logs.go:276] 0 containers: []
	W0216 09:54:13.386419   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:13.386426   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:13.386434   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:13.429226   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:13.429260   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:13.449924   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:13.449940   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:13.516556   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:13.516568   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:13.516590   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:13.539376   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:13.539391   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:16.107916   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:16.125865   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:16.143502   19385 logs.go:276] 0 containers: []
	W0216 09:54:16.143516   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:16.143590   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:16.161647   19385 logs.go:276] 0 containers: []
	W0216 09:54:16.161661   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:16.161733   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:16.181362   19385 logs.go:276] 0 containers: []
	W0216 09:54:16.181377   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:16.181444   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:16.200436   19385 logs.go:276] 0 containers: []
	W0216 09:54:16.200449   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:16.200518   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:16.219056   19385 logs.go:276] 0 containers: []
	W0216 09:54:16.219070   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:16.219139   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:16.238356   19385 logs.go:276] 0 containers: []
	W0216 09:54:16.238369   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:16.238438   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:16.256619   19385 logs.go:276] 0 containers: []
	W0216 09:54:16.256633   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:16.256702   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:16.276182   19385 logs.go:276] 0 containers: []
	W0216 09:54:16.276195   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:16.276202   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:16.276211   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:16.298118   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:16.298132   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:16.363819   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:16.363836   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:16.408277   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:16.408296   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:16.430125   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:16.430141   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:16.501555   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:19.003742   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:19.021866   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:19.040527   19385 logs.go:276] 0 containers: []
	W0216 09:54:19.040541   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:19.040610   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:19.058892   19385 logs.go:276] 0 containers: []
	W0216 09:54:19.058907   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:19.058995   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:19.078082   19385 logs.go:276] 0 containers: []
	W0216 09:54:19.078102   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:19.078166   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:19.096345   19385 logs.go:276] 0 containers: []
	W0216 09:54:19.108745   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:19.108825   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:19.129259   19385 logs.go:276] 0 containers: []
	W0216 09:54:19.129276   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:19.129344   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:19.148673   19385 logs.go:276] 0 containers: []
	W0216 09:54:19.148687   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:19.148753   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:19.167142   19385 logs.go:276] 0 containers: []
	W0216 09:54:19.167156   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:19.167227   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:19.186628   19385 logs.go:276] 0 containers: []
	W0216 09:54:19.186642   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:19.186649   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:19.186656   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:19.253631   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:19.253643   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:19.253651   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:19.276335   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:19.276350   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:19.343124   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:19.343139   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:19.386561   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:19.386579   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:21.908496   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:21.925931   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:21.945974   19385 logs.go:276] 0 containers: []
	W0216 09:54:21.945987   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:21.946062   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:21.964324   19385 logs.go:276] 0 containers: []
	W0216 09:54:21.964337   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:21.964402   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:21.983648   19385 logs.go:276] 0 containers: []
	W0216 09:54:21.983661   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:21.983721   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:22.001483   19385 logs.go:276] 0 containers: []
	W0216 09:54:22.001497   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:22.001564   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:22.020354   19385 logs.go:276] 0 containers: []
	W0216 09:54:22.020368   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:22.020431   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:22.038976   19385 logs.go:276] 0 containers: []
	W0216 09:54:22.038993   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:22.039082   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:22.057805   19385 logs.go:276] 0 containers: []
	W0216 09:54:22.057819   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:22.057893   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:22.075963   19385 logs.go:276] 0 containers: []
	W0216 09:54:22.075978   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:22.075986   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:22.075994   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:22.148814   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:22.148825   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:22.148847   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:22.172764   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:22.172780   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:22.239940   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:22.239956   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:22.287617   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:22.287637   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:24.809895   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:24.826515   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:24.845210   19385 logs.go:276] 0 containers: []
	W0216 09:54:24.845223   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:24.845290   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:24.862595   19385 logs.go:276] 0 containers: []
	W0216 09:54:24.862609   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:24.862678   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:24.883189   19385 logs.go:276] 0 containers: []
	W0216 09:54:24.883203   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:24.883261   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:24.901317   19385 logs.go:276] 0 containers: []
	W0216 09:54:24.901331   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:24.901404   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:24.920327   19385 logs.go:276] 0 containers: []
	W0216 09:54:24.920354   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:24.920422   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:24.938732   19385 logs.go:276] 0 containers: []
	W0216 09:54:24.938746   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:24.938821   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:24.958826   19385 logs.go:276] 0 containers: []
	W0216 09:54:24.958839   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:24.958912   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:24.979015   19385 logs.go:276] 0 containers: []
	W0216 09:54:24.979029   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:24.979036   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:24.979049   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:25.022154   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:25.022172   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:25.043877   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:25.043918   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:25.113928   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:25.113939   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:25.113946   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:25.135427   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:25.135442   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:27.702126   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:27.719051   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:27.738456   19385 logs.go:276] 0 containers: []
	W0216 09:54:27.738472   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:27.738569   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:27.759278   19385 logs.go:276] 0 containers: []
	W0216 09:54:27.759292   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:27.759369   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:27.780687   19385 logs.go:276] 0 containers: []
	W0216 09:54:27.780702   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:27.780778   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:27.811728   19385 logs.go:276] 0 containers: []
	W0216 09:54:27.811743   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:27.811819   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:27.835641   19385 logs.go:276] 0 containers: []
	W0216 09:54:27.835654   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:27.835722   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:27.854268   19385 logs.go:276] 0 containers: []
	W0216 09:54:27.854282   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:27.854352   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:27.873053   19385 logs.go:276] 0 containers: []
	W0216 09:54:27.873067   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:27.873136   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:27.892669   19385 logs.go:276] 0 containers: []
	W0216 09:54:27.892688   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:27.892696   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:27.892703   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:27.912052   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:27.912066   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:27.976887   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:27.976937   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:27.976945   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:27.999349   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:27.999363   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:28.066024   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:28.066054   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:30.612675   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:30.629750   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:30.648466   19385 logs.go:276] 0 containers: []
	W0216 09:54:30.648486   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:30.648555   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:30.667457   19385 logs.go:276] 0 containers: []
	W0216 09:54:30.667479   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:30.667558   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:30.686147   19385 logs.go:276] 0 containers: []
	W0216 09:54:30.686161   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:30.686241   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:30.704447   19385 logs.go:276] 0 containers: []
	W0216 09:54:30.704460   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:30.704526   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:30.724709   19385 logs.go:276] 0 containers: []
	W0216 09:54:30.724725   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:30.724809   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:30.745750   19385 logs.go:276] 0 containers: []
	W0216 09:54:30.745767   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:30.745836   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:30.769264   19385 logs.go:276] 0 containers: []
	W0216 09:54:30.769278   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:30.769344   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:30.815117   19385 logs.go:276] 0 containers: []
	W0216 09:54:30.815128   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:30.815134   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:30.815140   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:30.862546   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:30.862571   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:30.883290   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:30.883319   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:30.949742   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:30.949754   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:30.949765   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:30.971172   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:30.971189   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:33.539334   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:33.556699   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:33.574608   19385 logs.go:276] 0 containers: []
	W0216 09:54:33.574621   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:33.574690   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:33.594617   19385 logs.go:276] 0 containers: []
	W0216 09:54:33.594647   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:33.594709   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:33.613166   19385 logs.go:276] 0 containers: []
	W0216 09:54:33.613200   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:33.613314   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:33.633028   19385 logs.go:276] 0 containers: []
	W0216 09:54:33.633042   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:33.633169   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:33.653557   19385 logs.go:276] 0 containers: []
	W0216 09:54:33.653571   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:33.653638   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:33.672378   19385 logs.go:276] 0 containers: []
	W0216 09:54:33.672393   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:33.672468   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:33.691723   19385 logs.go:276] 0 containers: []
	W0216 09:54:33.691737   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:33.691804   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:33.711370   19385 logs.go:276] 0 containers: []
	W0216 09:54:33.711384   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:33.711394   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:33.711401   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:33.754465   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:33.754481   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:33.776159   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:33.776176   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:33.853028   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:33.853041   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:33.853049   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:33.874393   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:33.874407   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:36.439708   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:36.456202   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:36.475023   19385 logs.go:276] 0 containers: []
	W0216 09:54:36.475038   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:36.475104   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:36.495303   19385 logs.go:276] 0 containers: []
	W0216 09:54:36.495323   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:36.495411   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:36.516799   19385 logs.go:276] 0 containers: []
	W0216 09:54:36.516817   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:36.516943   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:36.541110   19385 logs.go:276] 0 containers: []
	W0216 09:54:36.541148   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:36.541272   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:36.621846   19385 logs.go:276] 0 containers: []
	W0216 09:54:36.621861   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:36.621935   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:36.641312   19385 logs.go:276] 0 containers: []
	W0216 09:54:36.641327   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:36.641402   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:36.660072   19385 logs.go:276] 0 containers: []
	W0216 09:54:36.660085   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:36.660152   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:36.680319   19385 logs.go:276] 0 containers: []
	W0216 09:54:36.680334   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:36.680343   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:36.680356   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:36.726383   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:36.726399   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:36.747894   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:36.747912   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:36.822496   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:36.822507   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:36.822524   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:36.843626   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:36.843641   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:39.411385   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:39.428964   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:39.447220   19385 logs.go:276] 0 containers: []
	W0216 09:54:39.447234   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:39.447296   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:39.465578   19385 logs.go:276] 0 containers: []
	W0216 09:54:39.465592   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:39.465677   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:39.484530   19385 logs.go:276] 0 containers: []
	W0216 09:54:39.484543   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:39.484619   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:39.504115   19385 logs.go:276] 0 containers: []
	W0216 09:54:39.504128   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:39.504196   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:39.523688   19385 logs.go:276] 0 containers: []
	W0216 09:54:39.523701   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:39.523772   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:39.543141   19385 logs.go:276] 0 containers: []
	W0216 09:54:39.543154   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:39.543221   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:39.561611   19385 logs.go:276] 0 containers: []
	W0216 09:54:39.561626   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:39.561699   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:39.579722   19385 logs.go:276] 0 containers: []
	W0216 09:54:39.579735   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:39.579742   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:39.579750   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:39.623100   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:39.623117   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:39.644335   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:39.644372   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:39.712994   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:39.713004   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:39.713012   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:39.734225   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:39.734238   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
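	The journalctl gathers above read the node's systemd units directly: -u selects a unit and may be repeated (docker plus cri-docker here), and -n 400 keeps the last 400 lines. The same pattern works for any other unit on the node, for example:
	
	    sudo journalctl -u kubelet -n 400 --no-pager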
	I0216 09:54:42.297502   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:42.314957   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:42.333810   19385 logs.go:276] 0 containers: []
	W0216 09:54:42.333825   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:42.333891   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:42.352744   19385 logs.go:276] 0 containers: []
	W0216 09:54:42.352758   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:42.352820   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:42.370986   19385 logs.go:276] 0 containers: []
	W0216 09:54:42.370998   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:42.371067   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:42.389791   19385 logs.go:276] 0 containers: []
	W0216 09:54:42.389805   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:42.389875   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:42.408562   19385 logs.go:276] 0 containers: []
	W0216 09:54:42.408581   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:42.408666   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:42.428441   19385 logs.go:276] 0 containers: []
	W0216 09:54:42.428459   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:42.428560   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:42.446736   19385 logs.go:276] 0 containers: []
	W0216 09:54:42.446749   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:42.446813   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:42.465822   19385 logs.go:276] 0 containers: []
	W0216 09:54:42.465839   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:42.465852   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:42.465881   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:42.510765   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:42.510781   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:42.531535   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:42.531589   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:42.604039   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:42.604089   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:42.604117   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:42.625407   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:42.625420   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:45.191855   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:45.209104   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:45.228161   19385 logs.go:276] 0 containers: []
	W0216 09:54:45.228174   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:45.228243   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:45.248850   19385 logs.go:276] 0 containers: []
	W0216 09:54:45.248863   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:45.248935   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:45.268997   19385 logs.go:276] 0 containers: []
	W0216 09:54:45.269015   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:45.269090   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:45.289986   19385 logs.go:276] 0 containers: []
	W0216 09:54:45.290001   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:45.290070   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:45.323370   19385 logs.go:276] 0 containers: []
	W0216 09:54:45.323389   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:45.323465   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:45.342263   19385 logs.go:276] 0 containers: []
	W0216 09:54:45.342277   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:45.342344   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:45.360856   19385 logs.go:276] 0 containers: []
	W0216 09:54:45.360870   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:45.360935   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:45.380518   19385 logs.go:276] 0 containers: []
	W0216 09:54:45.380535   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:45.380542   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:45.380552   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:45.424713   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:45.424727   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:45.445142   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:45.445157   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:45.513462   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:45.513489   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:45.513516   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:45.535452   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:45.535467   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
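	The "container status" line gathers containers with a fallback chain: prefer crictl when it resolves, otherwise fall back to plain docker. Spelled out (a long-hand sketch of the one-liner, not minikube source):
	
	    # sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    if command -v crictl >/dev/null 2>&1; then
	        sudo "$(command -v crictl)" ps -a   # CRI view of all containers
	    else
	        sudo docker ps -a                   # crictl missing: ask docker directly
	    fi
	
	The trailing || in the original additionally covers the case where crictl is present but the call itself fails.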
	I0216 09:54:48.104853   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:48.122219   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:48.140464   19385 logs.go:276] 0 containers: []
	W0216 09:54:48.140476   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:48.140548   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:48.158643   19385 logs.go:276] 0 containers: []
	W0216 09:54:48.158673   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:48.158765   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:48.177818   19385 logs.go:276] 0 containers: []
	W0216 09:54:48.177832   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:48.177898   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:48.197765   19385 logs.go:276] 0 containers: []
	W0216 09:54:48.197778   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:48.197840   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:48.216240   19385 logs.go:276] 0 containers: []
	W0216 09:54:48.216255   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:48.216322   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:48.236316   19385 logs.go:276] 0 containers: []
	W0216 09:54:48.236330   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:48.236398   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:48.258692   19385 logs.go:276] 0 containers: []
	W0216 09:54:48.258705   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:48.258774   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:48.279278   19385 logs.go:276] 0 containers: []
	W0216 09:54:48.279295   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:48.279304   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:48.279314   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:48.340517   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:48.340536   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:48.362482   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:48.362526   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:48.432402   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:48.432421   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:48.432430   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:48.454395   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:48.454422   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:51.022000   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:51.040312   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:51.057863   19385 logs.go:276] 0 containers: []
	W0216 09:54:51.057880   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:51.057960   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:51.076235   19385 logs.go:276] 0 containers: []
	W0216 09:54:51.076249   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:51.076315   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:51.095660   19385 logs.go:276] 0 containers: []
	W0216 09:54:51.095673   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:51.095741   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:51.113886   19385 logs.go:276] 0 containers: []
	W0216 09:54:51.113901   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:51.113968   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:51.133543   19385 logs.go:276] 0 containers: []
	W0216 09:54:51.133558   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:51.133639   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:51.152690   19385 logs.go:276] 0 containers: []
	W0216 09:54:51.152712   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:51.152785   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:51.173144   19385 logs.go:276] 0 containers: []
	W0216 09:54:51.173158   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:51.173227   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:51.190697   19385 logs.go:276] 0 containers: []
	W0216 09:54:51.190715   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:51.190723   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:51.190733   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:51.265956   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:51.265968   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:51.265982   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:51.327207   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:51.327224   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:51.413181   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:51.413196   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:51.456695   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:51.456711   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
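	The pgrep probes land roughly three seconds apart (09:54:48.10, 09:54:51.02, 09:54:53.98), so what this log records is a fixed-interval wait loop: probe for the apiserver and, while it is absent, gather the five log sources before retrying. The order of the gathers also shuffles between cycles (describe nodes came first in the 09:54:51 pass), consistent with iterating an unordered Go map. Schematically (a sketch of the loop's shape, not minikube's source):
	
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        # gather kubelet, dmesg, describe-nodes, Docker and container-status logs ...
	        sleep 3   # matches the ~3 s spacing of the probes above
	    done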
	I0216 09:54:53.977615   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:53.999240   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:54.019686   19385 logs.go:276] 0 containers: []
	W0216 09:54:54.019715   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:54.019813   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:54.040261   19385 logs.go:276] 0 containers: []
	W0216 09:54:54.040276   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:54.040361   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:54.059821   19385 logs.go:276] 0 containers: []
	W0216 09:54:54.059860   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:54.060030   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:54.079203   19385 logs.go:276] 0 containers: []
	W0216 09:54:54.079219   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:54.079314   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:54.120125   19385 logs.go:276] 0 containers: []
	W0216 09:54:54.120146   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:54.120226   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:54.140950   19385 logs.go:276] 0 containers: []
	W0216 09:54:54.140964   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:54.141035   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:54.159276   19385 logs.go:276] 0 containers: []
	W0216 09:54:54.159289   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:54.159361   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:54.180341   19385 logs.go:276] 0 containers: []
	W0216 09:54:54.180356   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:54.180363   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:54.180372   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:54.234238   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:54.234258   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:54.258540   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:54.258559   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:54.331104   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:54.331119   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:54.331135   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:54.356094   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:54.356111   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:56.923673   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:56.942802   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:56.962111   19385 logs.go:276] 0 containers: []
	W0216 09:54:56.962127   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:56.962205   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:56.983266   19385 logs.go:276] 0 containers: []
	W0216 09:54:56.983294   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:56.983364   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:57.002181   19385 logs.go:276] 0 containers: []
	W0216 09:54:57.002197   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:57.002273   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:57.022357   19385 logs.go:276] 0 containers: []
	W0216 09:54:57.022373   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:57.022445   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:57.044342   19385 logs.go:276] 0 containers: []
	W0216 09:54:57.044359   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:57.044435   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:54:57.064872   19385 logs.go:276] 0 containers: []
	W0216 09:54:57.064887   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:54:57.064979   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:54:57.085422   19385 logs.go:276] 0 containers: []
	W0216 09:54:57.085438   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:54:57.085541   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:54:57.104290   19385 logs.go:276] 0 containers: []
	W0216 09:54:57.104308   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:54:57.104319   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:54:57.104329   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:54:57.128293   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:54:57.128312   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:54:57.207441   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:54:57.207457   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:54:57.207466   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:54:57.234129   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:54:57.234178   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:54:57.320161   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:54:57.320177   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:54:59.873041   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:54:59.894615   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:54:59.915219   19385 logs.go:276] 0 containers: []
	W0216 09:54:59.915240   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:54:59.915320   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:54:59.938236   19385 logs.go:276] 0 containers: []
	W0216 09:54:59.938254   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:54:59.938333   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:54:59.958576   19385 logs.go:276] 0 containers: []
	W0216 09:54:59.958592   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:54:59.958667   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:54:59.978697   19385 logs.go:276] 0 containers: []
	W0216 09:54:59.978714   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:54:59.978800   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:54:59.999448   19385 logs.go:276] 0 containers: []
	W0216 09:54:59.999464   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:54:59.999531   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:00.019723   19385 logs.go:276] 0 containers: []
	W0216 09:55:00.019751   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:00.019850   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:00.040905   19385 logs.go:276] 0 containers: []
	W0216 09:55:00.040936   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:00.041044   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:00.061107   19385 logs.go:276] 0 containers: []
	W0216 09:55:00.061122   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:00.061130   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:00.061140   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:00.116873   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:00.116897   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:00.143085   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:00.143103   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:00.216821   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:00.216833   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:00.216841   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:00.243416   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:00.243434   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:02.829131   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:02.846368   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:02.862025   19385 logs.go:276] 0 containers: []
	W0216 09:55:02.862039   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:02.862108   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:02.879690   19385 logs.go:276] 0 containers: []
	W0216 09:55:02.879705   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:02.879774   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:02.896928   19385 logs.go:276] 0 containers: []
	W0216 09:55:02.896945   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:02.897030   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:02.913013   19385 logs.go:276] 0 containers: []
	W0216 09:55:02.913027   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:02.913094   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:02.930417   19385 logs.go:276] 0 containers: []
	W0216 09:55:02.930433   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:02.930509   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:02.947703   19385 logs.go:276] 0 containers: []
	W0216 09:55:02.947717   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:02.947793   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:02.964292   19385 logs.go:276] 0 containers: []
	W0216 09:55:02.964307   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:02.964372   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:02.981319   19385 logs.go:276] 0 containers: []
	W0216 09:55:02.981332   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:02.981340   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:02.981347   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:03.006413   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:03.006432   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:03.083209   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:03.083227   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:03.148262   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:03.148284   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:03.176475   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:03.176507   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:03.246548   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:05.746922   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:05.765626   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:05.784486   19385 logs.go:276] 0 containers: []
	W0216 09:55:05.784500   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:05.784565   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:05.802540   19385 logs.go:276] 0 containers: []
	W0216 09:55:05.802552   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:05.802621   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:05.820384   19385 logs.go:276] 0 containers: []
	W0216 09:55:05.820423   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:05.820581   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:05.844654   19385 logs.go:276] 0 containers: []
	W0216 09:55:05.844669   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:05.844739   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:05.862715   19385 logs.go:276] 0 containers: []
	W0216 09:55:05.862729   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:05.862794   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:05.880399   19385 logs.go:276] 0 containers: []
	W0216 09:55:05.880412   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:05.880479   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:05.901694   19385 logs.go:276] 0 containers: []
	W0216 09:55:05.901708   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:05.901775   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:05.920376   19385 logs.go:276] 0 containers: []
	W0216 09:55:05.920392   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:05.920399   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:05.920407   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:06.000394   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:06.000407   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:06.000415   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:06.025454   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:06.025475   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:06.110459   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:06.110474   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:06.167927   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:06.167944   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
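	For reference, the dmesg invocation repeated every cycle expands to these util-linux long options:
	
	    sudo dmesg --nopager --human --color=never \
	         --level warn,err,crit,alert,emerg | tail -n 400
	    # -P / --nopager   do not pipe output into a pager
	    # -H / --human     human-readable output (timestamps etc.)
	    # -L=never         disable colorized output
	    # --level ...      keep only warning-and-worse messages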
	I0216 09:55:08.689207   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:08.705642   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:08.723011   19385 logs.go:276] 0 containers: []
	W0216 09:55:08.723026   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:08.723098   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:08.742387   19385 logs.go:276] 0 containers: []
	W0216 09:55:08.742402   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:08.742468   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:08.761632   19385 logs.go:276] 0 containers: []
	W0216 09:55:08.761646   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:08.761707   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:08.781093   19385 logs.go:276] 0 containers: []
	W0216 09:55:08.781106   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:08.781178   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:08.800173   19385 logs.go:276] 0 containers: []
	W0216 09:55:08.800188   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:08.800251   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:08.818494   19385 logs.go:276] 0 containers: []
	W0216 09:55:08.818507   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:08.818590   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:08.836413   19385 logs.go:276] 0 containers: []
	W0216 09:55:08.836428   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:08.836496   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:08.857796   19385 logs.go:276] 0 containers: []
	W0216 09:55:08.857807   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:08.857815   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:08.857822   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:08.905784   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:08.905800   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:08.928325   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:08.928341   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:09.009049   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:09.009063   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:09.009072   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:09.034007   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:09.034031   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:11.619878   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:11.641391   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:11.659750   19385 logs.go:276] 0 containers: []
	W0216 09:55:11.659765   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:11.659838   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:11.678215   19385 logs.go:276] 0 containers: []
	W0216 09:55:11.678229   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:11.678298   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:11.697659   19385 logs.go:276] 0 containers: []
	W0216 09:55:11.697677   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:11.697747   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:11.715210   19385 logs.go:276] 0 containers: []
	W0216 09:55:11.715223   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:11.715290   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:11.734417   19385 logs.go:276] 0 containers: []
	W0216 09:55:11.734437   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:11.734526   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:11.757150   19385 logs.go:276] 0 containers: []
	W0216 09:55:11.757174   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:11.757275   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:11.810013   19385 logs.go:276] 0 containers: []
	W0216 09:55:11.810028   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:11.810105   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:11.832601   19385 logs.go:276] 0 containers: []
	W0216 09:55:11.832618   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:11.832627   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:11.832636   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:11.884951   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:11.884973   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:11.907291   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:11.907344   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:12.026614   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:12.026633   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:12.026640   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:12.048963   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:12.048977   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:14.616232   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:14.636416   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:14.654202   19385 logs.go:276] 0 containers: []
	W0216 09:55:14.654217   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:14.654288   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:14.673610   19385 logs.go:276] 0 containers: []
	W0216 09:55:14.673623   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:14.673706   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:14.691284   19385 logs.go:276] 0 containers: []
	W0216 09:55:14.691299   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:14.691361   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:14.709217   19385 logs.go:276] 0 containers: []
	W0216 09:55:14.709231   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:14.709294   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:14.727679   19385 logs.go:276] 0 containers: []
	W0216 09:55:14.727694   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:14.727764   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:14.745814   19385 logs.go:276] 0 containers: []
	W0216 09:55:14.745828   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:14.745897   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:14.763020   19385 logs.go:276] 0 containers: []
	W0216 09:55:14.763036   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:14.763105   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:14.782290   19385 logs.go:276] 0 containers: []
	W0216 09:55:14.782318   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:14.782326   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:14.782333   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:14.857396   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:14.857408   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:14.857416   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:14.879397   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:14.879428   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:14.942831   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:14.942846   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:14.990809   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:14.990827   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:17.513541   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:17.533731   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:17.552994   19385 logs.go:276] 0 containers: []
	W0216 09:55:17.553009   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:17.553081   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:17.572337   19385 logs.go:276] 0 containers: []
	W0216 09:55:17.572354   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:17.572428   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:17.591760   19385 logs.go:276] 0 containers: []
	W0216 09:55:17.591773   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:17.591843   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:17.612965   19385 logs.go:276] 0 containers: []
	W0216 09:55:17.612978   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:17.613046   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:17.635491   19385 logs.go:276] 0 containers: []
	W0216 09:55:17.635510   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:17.635626   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:17.654198   19385 logs.go:276] 0 containers: []
	W0216 09:55:17.654218   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:17.654294   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:17.673073   19385 logs.go:276] 0 containers: []
	W0216 09:55:17.673088   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:17.673183   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:17.691401   19385 logs.go:276] 0 containers: []
	W0216 09:55:17.691416   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:17.691423   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:17.691430   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:17.738277   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:17.738299   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:17.761514   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:17.761542   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:17.830615   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:17.830642   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:17.830649   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:17.852857   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:17.852872   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:20.417656   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:20.435765   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:20.453740   19385 logs.go:276] 0 containers: []
	W0216 09:55:20.453754   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:20.453818   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:20.471611   19385 logs.go:276] 0 containers: []
	W0216 09:55:20.471624   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:20.471692   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:20.490428   19385 logs.go:276] 0 containers: []
	W0216 09:55:20.490440   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:20.490503   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:20.508992   19385 logs.go:276] 0 containers: []
	W0216 09:55:20.509005   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:20.509070   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:20.526870   19385 logs.go:276] 0 containers: []
	W0216 09:55:20.526883   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:20.526969   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:20.544708   19385 logs.go:276] 0 containers: []
	W0216 09:55:20.544745   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:20.544827   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:20.563001   19385 logs.go:276] 0 containers: []
	W0216 09:55:20.563028   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:20.563102   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:20.582594   19385 logs.go:276] 0 containers: []
	W0216 09:55:20.582608   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:20.582616   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:20.582625   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:20.629890   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:20.629908   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:20.652528   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:20.652542   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:20.720729   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:20.720741   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:20.720749   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:20.742330   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:20.742345   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:23.308533   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:23.332969   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:23.361399   19385 logs.go:276] 0 containers: []
	W0216 09:55:23.361420   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:23.361544   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:23.382761   19385 logs.go:276] 0 containers: []
	W0216 09:55:23.382774   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:23.382866   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:23.401506   19385 logs.go:276] 0 containers: []
	W0216 09:55:23.401521   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:23.401588   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:23.418590   19385 logs.go:276] 0 containers: []
	W0216 09:55:23.418604   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:23.418672   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:23.445043   19385 logs.go:276] 0 containers: []
	W0216 09:55:23.445064   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:23.445223   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:23.476260   19385 logs.go:276] 0 containers: []
	W0216 09:55:23.476280   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:23.476373   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:23.497432   19385 logs.go:276] 0 containers: []
	W0216 09:55:23.497448   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:23.497535   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:23.517179   19385 logs.go:276] 0 containers: []
	W0216 09:55:23.517198   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:23.517212   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:23.517228   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:23.585127   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:23.585146   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:23.606044   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:23.606087   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:23.690195   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:23.690235   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:23.690258   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:23.712599   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:23.712613   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:26.292566   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:26.309822   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:26.329951   19385 logs.go:276] 0 containers: []
	W0216 09:55:26.329966   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:26.330021   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:26.348545   19385 logs.go:276] 0 containers: []
	W0216 09:55:26.348559   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:26.348622   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:26.367577   19385 logs.go:276] 0 containers: []
	W0216 09:55:26.367593   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:26.367666   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:26.386572   19385 logs.go:276] 0 containers: []
	W0216 09:55:26.386592   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:26.386659   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:26.404822   19385 logs.go:276] 0 containers: []
	W0216 09:55:26.404837   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:26.404903   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:26.422862   19385 logs.go:276] 0 containers: []
	W0216 09:55:26.422875   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:26.422947   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:26.442842   19385 logs.go:276] 0 containers: []
	W0216 09:55:26.442857   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:26.442921   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:26.462721   19385 logs.go:276] 0 containers: []
	W0216 09:55:26.462735   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:26.462743   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:26.462750   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:26.513062   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:26.513084   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:26.539149   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:26.539188   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:26.629276   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:26.629297   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:26.629316   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:26.656536   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:26.656556   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:29.222569   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:29.240590   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:29.261220   19385 logs.go:276] 0 containers: []
	W0216 09:55:29.261233   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:29.261307   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:29.281713   19385 logs.go:276] 0 containers: []
	W0216 09:55:29.281727   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:29.281790   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:29.301176   19385 logs.go:276] 0 containers: []
	W0216 09:55:29.301190   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:29.301258   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:29.320774   19385 logs.go:276] 0 containers: []
	W0216 09:55:29.320788   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:29.320852   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:29.339972   19385 logs.go:276] 0 containers: []
	W0216 09:55:29.339990   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:29.340079   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:29.359952   19385 logs.go:276] 0 containers: []
	W0216 09:55:29.359966   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:29.360032   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:29.380169   19385 logs.go:276] 0 containers: []
	W0216 09:55:29.380200   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:29.380285   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:29.405070   19385 logs.go:276] 0 containers: []
	W0216 09:55:29.405085   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:29.405093   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:29.405100   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:29.434469   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:29.434488   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:29.536347   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:29.536369   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:29.652810   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:29.652838   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:29.679594   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:29.679638   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:29.753019   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
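
One detail worth noting across iterations: the order of the "Gathering logs for ..." steps changes from cycle to cycle (kubelet first at 09:55:20, Docker first here at 09:55:29). That is what you would see if the log sources were held in a Go map, since Go deliberately randomizes map iteration order; this is an inference about the implementation, not something the log itself confirms. A self-contained demonstration of the underlying Go behavior:

package main

import "fmt"

func main() {
	// Go randomizes map iteration order, so the same program can print these
	// keys in a different order on each run -- matching the way the
	// "Gathering logs for ..." sequence shifts between retry cycles above.
	sources := map[string]bool{
		"kubelet":          true,
		"dmesg":            true,
		"describe nodes":   true,
		"Docker":           true,
		"container status": true,
	}
	for name := range sources {
		fmt.Println("Gathering logs for", name, "...")
	}
}
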
	I0216 09:55:32.253449   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:32.273292   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:32.296831   19385 logs.go:276] 0 containers: []
	W0216 09:55:32.296847   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:32.296921   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:32.318520   19385 logs.go:276] 0 containers: []
	W0216 09:55:32.318535   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:32.318602   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:32.339634   19385 logs.go:276] 0 containers: []
	W0216 09:55:32.339655   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:32.339733   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:32.362962   19385 logs.go:276] 0 containers: []
	W0216 09:55:32.362980   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:32.363048   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:32.382418   19385 logs.go:276] 0 containers: []
	W0216 09:55:32.382438   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:32.382532   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:32.402022   19385 logs.go:276] 0 containers: []
	W0216 09:55:32.402037   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:32.402104   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:32.421113   19385 logs.go:276] 0 containers: []
	W0216 09:55:32.421129   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:32.421199   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:32.442783   19385 logs.go:276] 0 containers: []
	W0216 09:55:32.442804   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:32.442814   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:32.442824   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:32.467410   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:32.467429   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:32.550703   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:32.550720   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:32.603872   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:32.603893   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:32.628087   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:32.628105   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:32.714344   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:35.214605   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:35.236109   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:35.259150   19385 logs.go:276] 0 containers: []
	W0216 09:55:35.259165   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:35.259236   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:35.282268   19385 logs.go:276] 0 containers: []
	W0216 09:55:35.282284   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:35.282362   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:35.304123   19385 logs.go:276] 0 containers: []
	W0216 09:55:35.304137   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:35.304205   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:35.321956   19385 logs.go:276] 0 containers: []
	W0216 09:55:35.321969   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:35.322037   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:35.341687   19385 logs.go:276] 0 containers: []
	W0216 09:55:35.341708   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:35.341801   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:35.363490   19385 logs.go:276] 0 containers: []
	W0216 09:55:35.363506   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:35.363589   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:35.385679   19385 logs.go:276] 0 containers: []
	W0216 09:55:35.385704   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:35.385782   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:35.410587   19385 logs.go:276] 0 containers: []
	W0216 09:55:35.410606   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:35.410614   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:35.410622   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:35.435190   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:35.435209   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:35.511610   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:35.511627   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:35.563895   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:35.563916   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:35.589564   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:35.589583   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:35.663324   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:38.163896   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:38.184514   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:38.218977   19385 logs.go:276] 0 containers: []
	W0216 09:55:38.218993   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:38.219064   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:38.243872   19385 logs.go:276] 0 containers: []
	W0216 09:55:38.243892   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:38.243995   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:38.269946   19385 logs.go:276] 0 containers: []
	W0216 09:55:38.269961   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:38.270032   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:38.292443   19385 logs.go:276] 0 containers: []
	W0216 09:55:38.292461   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:38.292546   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:38.317088   19385 logs.go:276] 0 containers: []
	W0216 09:55:38.317103   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:38.317167   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:38.338993   19385 logs.go:276] 0 containers: []
	W0216 09:55:38.339012   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:38.339091   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:38.362298   19385 logs.go:276] 0 containers: []
	W0216 09:55:38.362316   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:38.362384   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:38.387004   19385 logs.go:276] 0 containers: []
	W0216 09:55:38.387029   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:38.387045   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:38.387061   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:38.436721   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:38.436739   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:38.459758   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:38.459777   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:38.541034   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:38.541047   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:38.541079   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:38.565169   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:38.565189   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
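
The timestamps also show a fixed cadence: a fresh pgrep probe starts roughly every three seconds (09:55:20, :23, :26, :29, ...), i.e. the apiserver check is a poll loop rather than an event-driven wait. A hypothetical loop with the same shape (a sketch only; the interval and timeout values are illustrative, and this is not minikube's actual retry code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs the same pgrep probe the log shows, once per
// interval, until it succeeds or the deadline passes.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when a matching process exists, non-zero otherwise.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(3*time.Second, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
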
	I0216 09:55:41.139835   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:41.157694   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:41.175681   19385 logs.go:276] 0 containers: []
	W0216 09:55:41.175696   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:41.175764   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:41.194132   19385 logs.go:276] 0 containers: []
	W0216 09:55:41.194146   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:41.194212   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:41.212893   19385 logs.go:276] 0 containers: []
	W0216 09:55:41.212906   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:41.212968   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:41.231378   19385 logs.go:276] 0 containers: []
	W0216 09:55:41.231394   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:41.231475   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:41.253311   19385 logs.go:276] 0 containers: []
	W0216 09:55:41.253325   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:41.253422   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:41.273197   19385 logs.go:276] 0 containers: []
	W0216 09:55:41.273211   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:41.273282   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:41.313138   19385 logs.go:276] 0 containers: []
	W0216 09:55:41.313153   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:41.313223   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:41.332083   19385 logs.go:276] 0 containers: []
	W0216 09:55:41.332096   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:41.332104   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:41.332111   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:41.377453   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:41.377470   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:41.397352   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:41.397367   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:41.467553   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:41.467567   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:41.467575   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:41.490074   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:41.490090   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:44.055510   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:44.072116   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:44.089925   19385 logs.go:276] 0 containers: []
	W0216 09:55:44.112564   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:44.112626   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:44.133194   19385 logs.go:276] 0 containers: []
	W0216 09:55:44.133237   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:44.133364   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:44.152195   19385 logs.go:276] 0 containers: []
	W0216 09:55:44.152209   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:44.152275   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:44.171433   19385 logs.go:276] 0 containers: []
	W0216 09:55:44.171447   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:44.171510   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:44.190663   19385 logs.go:276] 0 containers: []
	W0216 09:55:44.190677   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:44.190743   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:44.209761   19385 logs.go:276] 0 containers: []
	W0216 09:55:44.209775   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:44.209844   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:44.228438   19385 logs.go:276] 0 containers: []
	W0216 09:55:44.228452   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:44.228517   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:44.249053   19385 logs.go:276] 0 containers: []
	W0216 09:55:44.249068   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:44.249075   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:44.249082   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:44.298961   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:44.298983   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:44.320039   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:44.320056   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:44.398359   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:44.398392   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:44.398400   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:44.421714   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:44.421729   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:46.986229   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:47.003372   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:47.021014   19385 logs.go:276] 0 containers: []
	W0216 09:55:47.021027   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:47.021116   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:47.039916   19385 logs.go:276] 0 containers: []
	W0216 09:55:47.039929   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:47.039997   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:47.057903   19385 logs.go:276] 0 containers: []
	W0216 09:55:47.057919   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:47.057998   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:47.074919   19385 logs.go:276] 0 containers: []
	W0216 09:55:47.074933   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:47.074999   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:47.092977   19385 logs.go:276] 0 containers: []
	W0216 09:55:47.092991   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:47.093058   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:47.110626   19385 logs.go:276] 0 containers: []
	W0216 09:55:47.110640   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:47.110713   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:47.130071   19385 logs.go:276] 0 containers: []
	W0216 09:55:47.130086   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:47.130148   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:47.149595   19385 logs.go:276] 0 containers: []
	W0216 09:55:47.149610   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:47.149617   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:47.149625   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:47.193591   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:47.193606   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:47.214108   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:47.214123   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:47.281452   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:47.281462   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:47.281469   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:47.302717   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:47.302732   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:49.867855   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:49.885394   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:49.903962   19385 logs.go:276] 0 containers: []
	W0216 09:55:49.903977   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:49.904044   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:49.921960   19385 logs.go:276] 0 containers: []
	W0216 09:55:49.921973   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:49.922040   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:49.940022   19385 logs.go:276] 0 containers: []
	W0216 09:55:49.940036   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:49.940102   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:49.958124   19385 logs.go:276] 0 containers: []
	W0216 09:55:49.958137   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:49.958202   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:49.978273   19385 logs.go:276] 0 containers: []
	W0216 09:55:49.978287   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:49.978368   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:49.998394   19385 logs.go:276] 0 containers: []
	W0216 09:55:49.998410   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:49.998486   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:50.020179   19385 logs.go:276] 0 containers: []
	W0216 09:55:50.020204   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:50.020292   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:50.044226   19385 logs.go:276] 0 containers: []
	W0216 09:55:50.044241   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:50.044264   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:50.044276   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:50.132928   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:50.132943   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:50.153536   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:50.153553   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:50.222644   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:50.222654   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:50.222662   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:50.244401   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:50.244416   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:52.809117   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:52.828980   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:52.848276   19385 logs.go:276] 0 containers: []
	W0216 09:55:52.848290   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:52.848358   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:52.867824   19385 logs.go:276] 0 containers: []
	W0216 09:55:52.867844   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:52.867909   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:52.887200   19385 logs.go:276] 0 containers: []
	W0216 09:55:52.887212   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:52.887277   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:52.905253   19385 logs.go:276] 0 containers: []
	W0216 09:55:52.905274   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:52.905373   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:52.922057   19385 logs.go:276] 0 containers: []
	W0216 09:55:52.922071   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:52.922132   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:52.938995   19385 logs.go:276] 0 containers: []
	W0216 09:55:52.939009   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:52.939073   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:52.957117   19385 logs.go:276] 0 containers: []
	W0216 09:55:52.957131   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:52.957195   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:52.975682   19385 logs.go:276] 0 containers: []
	W0216 09:55:52.975694   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:52.975701   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:52.975707   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:53.043648   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:53.043667   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:53.088941   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:53.088958   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:53.111097   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:53.111112   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:53.177916   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:53.177930   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:53.177937   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:55.702319   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:55.719736   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:55.747039   19385 logs.go:276] 0 containers: []
	W0216 09:55:55.747064   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:55.747140   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:55.766205   19385 logs.go:276] 0 containers: []
	W0216 09:55:55.766220   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:55.766295   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:55.784407   19385 logs.go:276] 0 containers: []
	W0216 09:55:55.784423   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:55.784495   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:55.807409   19385 logs.go:276] 0 containers: []
	W0216 09:55:55.807424   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:55.807495   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:55.826066   19385 logs.go:276] 0 containers: []
	W0216 09:55:55.826087   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:55.826179   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:55.852117   19385 logs.go:276] 0 containers: []
	W0216 09:55:55.852137   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:55.852232   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:55.868685   19385 logs.go:276] 0 containers: []
	W0216 09:55:55.868699   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:55.868763   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:55.885713   19385 logs.go:276] 0 containers: []
	W0216 09:55:55.885727   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:55.885735   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:55.885745   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:55.962744   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:55:55.962788   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:55.962795   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:55.985986   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:55.986000   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:56.069236   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:56.069250   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:56.114511   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:56.114528   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:58.643033   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:55:58.660905   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:55:58.679352   19385 logs.go:276] 0 containers: []
	W0216 09:55:58.679364   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:55:58.679418   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:55:58.697387   19385 logs.go:276] 0 containers: []
	W0216 09:55:58.697401   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:55:58.697466   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:55:58.716880   19385 logs.go:276] 0 containers: []
	W0216 09:55:58.716897   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:55:58.716965   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:55:58.743580   19385 logs.go:276] 0 containers: []
	W0216 09:55:58.743599   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:55:58.743703   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:55:58.773281   19385 logs.go:276] 0 containers: []
	W0216 09:55:58.773299   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:55:58.773375   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:55:58.811982   19385 logs.go:276] 0 containers: []
	W0216 09:55:58.812003   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:55:58.812088   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:55:58.830914   19385 logs.go:276] 0 containers: []
	W0216 09:55:58.830935   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:55:58.831041   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:55:58.847984   19385 logs.go:276] 0 containers: []
	W0216 09:55:58.847997   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:55:58.848005   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:55:58.848011   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:55:58.871544   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:55:58.871560   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:55:58.949810   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:55:58.949844   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:55:58.994343   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:55:58.994357   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:55:59.014809   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:55:59.014849   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:55:59.083514   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:56:01.584420   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:56:01.603860   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:56:01.623825   19385 logs.go:276] 0 containers: []
	W0216 09:56:01.623840   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:56:01.623911   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:56:01.642576   19385 logs.go:276] 0 containers: []
	W0216 09:56:01.642593   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:56:01.642667   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:56:01.659836   19385 logs.go:276] 0 containers: []
	W0216 09:56:01.659851   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:56:01.659924   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:56:01.677562   19385 logs.go:276] 0 containers: []
	W0216 09:56:01.677578   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:56:01.677653   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:56:01.697194   19385 logs.go:276] 0 containers: []
	W0216 09:56:01.697212   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:56:01.697284   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:56:01.715460   19385 logs.go:276] 0 containers: []
	W0216 09:56:01.715477   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:56:01.715542   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:56:01.734116   19385 logs.go:276] 0 containers: []
	W0216 09:56:01.734131   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:56:01.734204   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:56:01.752002   19385 logs.go:276] 0 containers: []
	W0216 09:56:01.752018   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:56:01.752027   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:56:01.752035   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:56:01.797846   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:56:01.797866   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:56:01.820548   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:56:01.820567   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:56:01.885812   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:56:01.885849   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:56:01.885860   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:56:01.910990   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:56:01.911007   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:56:04.479373   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:56:04.496775   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:56:04.515749   19385 logs.go:276] 0 containers: []
	W0216 09:56:04.515766   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:56:04.515827   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:56:04.534067   19385 logs.go:276] 0 containers: []
	W0216 09:56:04.534084   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:56:04.534154   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:56:04.551541   19385 logs.go:276] 0 containers: []
	W0216 09:56:04.551555   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:56:04.551623   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:56:04.569109   19385 logs.go:276] 0 containers: []
	W0216 09:56:04.569126   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:56:04.569198   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:56:04.589069   19385 logs.go:276] 0 containers: []
	W0216 09:56:04.589085   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:56:04.589156   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:56:04.607096   19385 logs.go:276] 0 containers: []
	W0216 09:56:04.607115   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:56:04.607179   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:56:04.624946   19385 logs.go:276] 0 containers: []
	W0216 09:56:04.624961   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:56:04.625038   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:56:04.643635   19385 logs.go:276] 0 containers: []
	W0216 09:56:04.643650   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:56:04.643658   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:56:04.643668   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:56:04.716227   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:56:04.716239   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:56:04.716248   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:56:04.739804   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:56:04.739820   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:56:04.815981   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:56:04.815996   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:56:04.868830   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:56:04.868850   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
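(annotation) The cycle above repeats on roughly a three-second poll for the rest of the wait: minikube probes for a kube-apiserver process, finds none of the expected k8s_* containers, and re-collects the same five log sources (kubelet, dmesg, describe nodes, Docker, container status). The "container status" collector is a shell fallback chain; a sketch of what it does, with explanatory comments added here (the command itself is quoted from the run):

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	# if crictl is on PATH, the backticks expand to its full path and `crictl ps -a` runs;
	# otherwise the inner echo leaves the bare word "crictl", that invocation fails,
	# and the outer || falls through to `sudo docker ps -a`.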
	I0216 09:56:07.390748   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:56:07.411283   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:56:07.430393   19385 logs.go:276] 0 containers: []
	W0216 09:56:07.430406   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:56:07.430474   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:56:07.448282   19385 logs.go:276] 0 containers: []
	W0216 09:56:07.448297   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:56:07.448371   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:56:07.466290   19385 logs.go:276] 0 containers: []
	W0216 09:56:07.466303   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:56:07.466368   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:56:07.484564   19385 logs.go:276] 0 containers: []
	W0216 09:56:07.484577   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:56:07.484644   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:56:07.502567   19385 logs.go:276] 0 containers: []
	W0216 09:56:07.502586   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:56:07.502652   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:56:07.519112   19385 logs.go:276] 0 containers: []
	W0216 09:56:07.519128   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:56:07.519199   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:56:07.536563   19385 logs.go:276] 0 containers: []
	W0216 09:56:07.536577   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:56:07.536643   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:56:07.554649   19385 logs.go:276] 0 containers: []
	W0216 09:56:07.554663   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:56:07.554686   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:56:07.554713   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:56:07.604980   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:56:07.604997   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:56:07.627224   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:56:07.627241   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:56:07.714499   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:56:07.714513   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:56:07.714524   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:56:07.740030   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:56:07.740048   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:56:10.325867   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:56:10.342969   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:56:10.360485   19385 logs.go:276] 0 containers: []
	W0216 09:56:10.360511   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:56:10.360643   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:56:10.378063   19385 logs.go:276] 0 containers: []
	W0216 09:56:10.378077   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:56:10.378155   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:56:10.396924   19385 logs.go:276] 0 containers: []
	W0216 09:56:10.396940   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:56:10.397010   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:56:10.414660   19385 logs.go:276] 0 containers: []
	W0216 09:56:10.414676   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:56:10.414773   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:56:10.435489   19385 logs.go:276] 0 containers: []
	W0216 09:56:10.435504   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:56:10.435583   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:56:10.455043   19385 logs.go:276] 0 containers: []
	W0216 09:56:10.455056   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:56:10.455125   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:56:10.473482   19385 logs.go:276] 0 containers: []
	W0216 09:56:10.473497   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:56:10.473562   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:56:10.490363   19385 logs.go:276] 0 containers: []
	W0216 09:56:10.490380   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:56:10.490387   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:56:10.490395   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:56:10.511602   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:56:10.511633   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:56:10.588909   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:56:10.588923   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:56:10.588931   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:56:10.614939   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:56:10.614956   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:56:10.683872   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:56:10.683886   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:56:13.232060   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:56:13.258444   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:56:13.275481   19385 logs.go:276] 0 containers: []
	W0216 09:56:13.275511   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:56:13.275575   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:56:13.293205   19385 logs.go:276] 0 containers: []
	W0216 09:56:13.293220   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:56:13.293294   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:56:13.310774   19385 logs.go:276] 0 containers: []
	W0216 09:56:13.310792   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:56:13.310920   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:56:13.334577   19385 logs.go:276] 0 containers: []
	W0216 09:56:13.334625   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:56:13.334720   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:56:13.355846   19385 logs.go:276] 0 containers: []
	W0216 09:56:13.355861   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:56:13.355926   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:56:13.372104   19385 logs.go:276] 0 containers: []
	W0216 09:56:13.372123   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:56:13.372194   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:56:13.389098   19385 logs.go:276] 0 containers: []
	W0216 09:56:13.389112   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:56:13.389185   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:56:13.406173   19385 logs.go:276] 0 containers: []
	W0216 09:56:13.406187   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:56:13.406196   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:56:13.406206   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:56:13.434815   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:56:13.434832   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:56:13.518298   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:56:13.518321   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:56:13.518335   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:56:13.545504   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:56:13.545519   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:56:13.616027   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:56:13.616045   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:56:16.166619   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:56:16.184136   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:56:16.201460   19385 logs.go:276] 0 containers: []
	W0216 09:56:16.201474   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:56:16.201542   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:56:16.220077   19385 logs.go:276] 0 containers: []
	W0216 09:56:16.220091   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:56:16.220165   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:56:16.237926   19385 logs.go:276] 0 containers: []
	W0216 09:56:16.237941   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:56:16.238005   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:56:16.256473   19385 logs.go:276] 0 containers: []
	W0216 09:56:16.256487   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:56:16.256551   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:56:16.273459   19385 logs.go:276] 0 containers: []
	W0216 09:56:16.273473   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:56:16.273539   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:56:16.290467   19385 logs.go:276] 0 containers: []
	W0216 09:56:16.290481   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:56:16.290546   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:56:16.309193   19385 logs.go:276] 0 containers: []
	W0216 09:56:16.309210   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:56:16.309286   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:56:16.326371   19385 logs.go:276] 0 containers: []
	W0216 09:56:16.326384   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:56:16.326391   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:56:16.326401   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:56:16.387993   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:56:16.388007   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:56:16.433425   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:56:16.433444   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:56:16.454312   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:56:16.454336   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:56:16.542341   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:56:16.542361   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:56:16.542377   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:56:19.066441   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:56:19.086936   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 09:56:19.108215   19385 logs.go:276] 0 containers: []
	W0216 09:56:19.111218   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 09:56:19.111300   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 09:56:19.133716   19385 logs.go:276] 0 containers: []
	W0216 09:56:19.133741   19385 logs.go:278] No container was found matching "etcd"
	I0216 09:56:19.133837   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 09:56:19.153587   19385 logs.go:276] 0 containers: []
	W0216 09:56:19.153605   19385 logs.go:278] No container was found matching "coredns"
	I0216 09:56:19.153715   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 09:56:19.172366   19385 logs.go:276] 0 containers: []
	W0216 09:56:19.172381   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 09:56:19.172450   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 09:56:19.191134   19385 logs.go:276] 0 containers: []
	W0216 09:56:19.191145   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 09:56:19.191198   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 09:56:19.209478   19385 logs.go:276] 0 containers: []
	W0216 09:56:19.209493   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 09:56:19.209563   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 09:56:19.229606   19385 logs.go:276] 0 containers: []
	W0216 09:56:19.229621   19385 logs.go:278] No container was found matching "kindnet"
	I0216 09:56:19.229683   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 09:56:19.250283   19385 logs.go:276] 0 containers: []
	W0216 09:56:19.250306   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 09:56:19.250319   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 09:56:19.250330   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 09:56:19.325607   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 09:56:19.325618   19385 logs.go:123] Gathering logs for Docker ...
	I0216 09:56:19.325628   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 09:56:19.347108   19385 logs.go:123] Gathering logs for container status ...
	I0216 09:56:19.347122   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 09:56:19.410060   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 09:56:19.410075   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 09:56:19.453593   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 09:56:19.453610   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 09:56:21.976380   19385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:56:21.994041   19385 kubeadm.go:640] restartCluster took 4m12.591753037s
	W0216 09:56:21.994085   19385 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0216 09:56:21.994105   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 09:56:22.414680   19385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 09:56:22.432352   19385 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 09:56:22.447356   19385 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 09:56:22.447416   19385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:56:22.462674   19385 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
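(annotation) The status-2 `ls -la` above is minikube's stale-config probe: it checks for the four kubeconfigs a previous control plane would have left in /etc/kubernetes. The `kubeadm reset` a few lines earlier removed them all, so there is nothing stale to clean up and minikube falls through to a fresh `kubeadm init` on the next line. A minimal way to reproduce the probe by hand, assuming SSH access to the node (hypothetical; not part of this run):

	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	  /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	# exit status 0 -> stale configs exist and would be cleaned up first
	# exit status 2 -> the files are absent (the case here), so init proceeds directly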
	I0216 09:56:22.462701   19385 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 09:56:22.518151   19385 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 09:56:22.518192   19385 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 09:56:22.772019   19385 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 09:56:22.772102   19385 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 09:56:22.772189   19385 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 09:56:22.942328   19385 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 09:56:22.955196   19385 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 09:56:22.962215   19385 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 09:56:23.054163   19385 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 09:56:23.074716   19385 out.go:204]   - Generating certificates and keys ...
	I0216 09:56:23.074785   19385 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 09:56:23.074841   19385 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 09:56:23.074911   19385 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 09:56:23.074966   19385 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 09:56:23.075028   19385 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 09:56:23.075073   19385 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 09:56:23.075119   19385 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 09:56:23.075164   19385 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 09:56:23.075215   19385 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 09:56:23.075278   19385 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 09:56:23.075312   19385 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 09:56:23.075366   19385 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 09:56:23.247782   19385 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 09:56:23.427334   19385 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 09:56:23.561545   19385 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 09:56:23.628154   19385 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 09:56:23.628678   19385 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 09:56:23.650213   19385 out.go:204]   - Booting up control plane ...
	I0216 09:56:23.650365   19385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 09:56:23.650470   19385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 09:56:23.650560   19385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 09:56:23.650684   19385 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 09:56:23.650871   19385 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 09:57:03.639159   19385 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 09:57:03.639700   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:57:03.639846   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:57:08.641597   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:57:08.641778   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:57:18.644555   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:57:18.644699   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:57:38.646280   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:57:38.646430   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:58:18.649737   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:58:18.649899   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:58:18.649910   19385 kubeadm.go:322] 
	I0216 09:58:18.649942   19385 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 09:58:18.649979   19385 kubeadm.go:322] 	timed out waiting for the condition
	I0216 09:58:18.649991   19385 kubeadm.go:322] 
	I0216 09:58:18.650018   19385 kubeadm.go:322] This error is likely caused by:
	I0216 09:58:18.650043   19385 kubeadm.go:322] 	- The kubelet is not running
	I0216 09:58:18.650122   19385 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 09:58:18.650129   19385 kubeadm.go:322] 
	I0216 09:58:18.650202   19385 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 09:58:18.650224   19385 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 09:58:18.650249   19385 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 09:58:18.650252   19385 kubeadm.go:322] 
	I0216 09:58:18.650338   19385 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 09:58:18.650412   19385 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 09:58:18.650481   19385 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 09:58:18.650528   19385 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 09:58:18.650590   19385 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 09:58:18.650616   19385 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 09:58:18.653477   19385 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 09:58:18.653568   19385 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0216 09:58:18.653737   19385 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 09:58:18.653828   19385 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 09:58:18.653909   19385 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 09:58:18.653974   19385 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0216 09:58:18.654044   19385 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
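(annotation) This first kubeadm init attempt fails because the kubelet health endpoint on 127.0.0.1:10248 never answers, so wait-control-plane gives up; the preflight warnings already hint at the likely cause (Docker 25.0.3 with the "cgroupfs" cgroup driver, far past kubeadm v1.16's last validated version, 18.09). minikube resets and retries below, and the retry fails identically. The log's own suggested triage, plus one commonly cited remediation for the cgroup-driver warning (the daemon.json change is an assumption on my part; this run never attempted it):

	# triage commands quoted from the kubeadm output above
	systemctl status kubelet
	journalctl -xeu kubelet
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID        # CONTAINERID: placeholder for a failing container

	# hypothetical remediation: switch Docker to the systemd cgroup driver
	# (per the guide referenced in the IsDockerSystemdCheck warning)
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{ "exec-opts": ["native.cgroupdriver=systemd"] }
	EOF
	sudo systemctl restart docker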
	
	I0216 09:58:18.654072   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 09:58:19.076900   19385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 09:58:19.094331   19385 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 09:58:19.114074   19385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:58:19.129946   19385 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 09:58:19.129970   19385 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 09:58:19.191424   19385 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 09:58:19.191469   19385 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 09:58:19.551196   19385 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 09:58:19.551284   19385 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 09:58:19.551381   19385 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 09:58:19.720240   19385 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 09:58:19.720963   19385 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 09:58:19.727407   19385 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 09:58:19.798675   19385 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 09:58:19.820119   19385 out.go:204]   - Generating certificates and keys ...
	I0216 09:58:19.820189   19385 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 09:58:19.820253   19385 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 09:58:19.820322   19385 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 09:58:19.820370   19385 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 09:58:19.820428   19385 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 09:58:19.820477   19385 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 09:58:19.820534   19385 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 09:58:19.820587   19385 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 09:58:19.820652   19385 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 09:58:19.820713   19385 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 09:58:19.820745   19385 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 09:58:19.820788   19385 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 09:58:19.957136   19385 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 09:58:20.279617   19385 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 09:58:20.368731   19385 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 09:58:20.443289   19385 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 09:58:20.444168   19385 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 09:58:20.465940   19385 out.go:204]   - Booting up control plane ...
	I0216 09:58:20.466059   19385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 09:58:20.466137   19385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 09:58:20.466194   19385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 09:58:20.466263   19385 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 09:58:20.466377   19385 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 09:59:00.456106   19385 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 09:59:00.456612   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:59:00.456763   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:59:05.458120   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:59:05.458270   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:59:15.461378   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:59:15.461766   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:59:35.463321   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:59:35.463527   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 10:00:15.465026   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 10:00:15.465189   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 10:00:15.465200   19385 kubeadm.go:322] 
	I0216 10:00:15.465234   19385 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 10:00:15.465267   19385 kubeadm.go:322] 	timed out waiting for the condition
	I0216 10:00:15.465272   19385 kubeadm.go:322] 
	I0216 10:00:15.465304   19385 kubeadm.go:322] This error is likely caused by:
	I0216 10:00:15.465335   19385 kubeadm.go:322] 	- The kubelet is not running
	I0216 10:00:15.465416   19385 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 10:00:15.465421   19385 kubeadm.go:322] 
	I0216 10:00:15.465496   19385 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 10:00:15.465526   19385 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 10:00:15.465551   19385 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 10:00:15.465556   19385 kubeadm.go:322] 
	I0216 10:00:15.465636   19385 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 10:00:15.465707   19385 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 10:00:15.465782   19385 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 10:00:15.465826   19385 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 10:00:15.465891   19385 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 10:00:15.465921   19385 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 10:00:15.470210   19385 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 10:00:15.470273   19385 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0216 10:00:15.470372   19385 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 10:00:15.470459   19385 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 10:00:15.470534   19385 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 10:00:15.470604   19385 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 10:00:15.470627   19385 kubeadm.go:406] StartCluster complete in 8m6.098704701s
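(annotation) The 8m6s StartCluster total is consistent with the phases logged above: restartCluster spent 4m12s polling for an apiserver that never appeared, and each of the two kubeadm init attempts then ran about 1m56s (09:56:22 to 09:58:18 and 09:58:19 to 10:00:15) before giving up, the kubelet-check backoff being visible in the timestamps as a 40s initial timeout followed by retries roughly 5s, 10s, 20s, and 40s apart. 4m12s + 1m56s + 1m56s, plus the two sub-second resets, accounts for the 8m6s. What follows is one final log-gathering pass before the start attempt is abandoned.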
	I0216 10:00:15.470719   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 10:00:15.488039   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.488052   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 10:00:15.488122   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 10:00:15.508099   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.508117   19385 logs.go:278] No container was found matching "etcd"
	I0216 10:00:15.508194   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 10:00:15.533356   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.533372   19385 logs.go:278] No container was found matching "coredns"
	I0216 10:00:15.533514   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 10:00:15.563458   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.563477   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 10:00:15.563548   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 10:00:15.582274   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.582288   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 10:00:15.582358   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 10:00:15.624621   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.624636   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 10:00:15.624703   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 10:00:15.644158   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.644172   19385 logs.go:278] No container was found matching "kindnet"
	I0216 10:00:15.644238   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 10:00:15.663073   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.663087   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 10:00:15.663095   19385 logs.go:123] Gathering logs for Docker ...
	I0216 10:00:15.663102   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 10:00:15.685020   19385 logs.go:123] Gathering logs for container status ...
	I0216 10:00:15.685036   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 10:00:15.750347   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 10:00:15.750361   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 10:00:15.796119   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 10:00:15.796134   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 10:00:15.816457   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 10:00:15.816475   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 10:00:15.883186   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
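(annotation) Every "describe nodes" pass in this section fails with the same connection-refused error: the bundled kubectl targets the apiserver at localhost:8443 via /var/lib/minikube/kubeconfig, and nothing ever listens there, so only the journald, dmesg, and container-status captures carry real diagnostics. A hypothetical spot-check from inside the node (illustrative; not executed in this run):

	curl -sk https://localhost:8443/healthz
	# expected while the control plane is down: connect: connection refused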
	W0216 10:00:15.883203   19385 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
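	A minimal triage pass based on the advice above, as a sketch: with the docker driver, the "node" is the kic container, which is named after the profile (old-k8s-version-356000 in this test, per the docker inspect output later in this report), so `docker exec` stands in for a shell on the node. CONTAINERID is a placeholder, as above.
	
	  docker exec old-k8s-version-356000 systemctl status kubelet
	  docker exec old-k8s-version-356000 journalctl -xeu kubelet --no-pager | tail -n 50
	  docker exec old-k8s-version-356000 sh -c 'docker ps -a | grep kube | grep -v pause'
	  docker exec old-k8s-version-356000 docker logs CONTAINERID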
	W0216 10:00:15.883259   19385 out.go:239] * 
	W0216 10:00:15.883291   19385 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0216 10:00:15.883307   19385 out.go:239] * 
	W0216 10:00:15.884183   19385 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
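	For the log-collection step the box asks for, with this run's binary and profile, the invocation would be (a sketch; --file writes the gathered logs to the given file):
	
	  out/minikube-darwin-amd64 -p old-k8s-version-356000 logs --file=logs.txt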
	I0216 10:00:15.947677   19385 out.go:177] 
	W0216 10:00:15.990622   19385 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0216 10:00:15.990676   19385 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 10:00:15.990745   19385 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
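	A sketch of the suggested retry: the failing invocation recorded below, unchanged except for the hinted kubelet flag (whether the systemd cgroup driver resolves this particular run is not verified here):
	
	  out/minikube-darwin-amd64 start -p old-k8s-version-356000 --memory=2200 \
	    --alsologtostderr --wait=true --driver=docker --kubernetes-version=v1.16.0 \
	    --extra-config=kubelet.cgroup-driver=systemd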
	I0216 10:00:16.054560   19385 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-356000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-356000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-356000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01",
	        "Created": "2024-02-16T17:45:56.532939996Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 376234,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:51:50.249201454Z",
	            "FinishedAt": "2024-02-16T17:51:47.463182294Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hosts",
	        "LogPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01-json.log",
	        "Name": "/old-k8s-version-356000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-356000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-356000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-356000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-356000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-356000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3796cb96e0afd4653a016009a08ea7784172e6af1b37db6d9e51767cab847db4",
	            "SandboxKey": "/var/run/docker/netns/3796cb96e0af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54075"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54076"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54078"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54079"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-356000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c7e40ba5a933",
	                        "old-k8s-version-356000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "2b231f9382e31cc79f696866baa9c7eea268e7a10c9edda380cefa5e7ba22d21",
	                    "EndpointID": "90b836fe9f235eb417d06d2677831883e0644a25bed3bcd671f8e46a12d2f8a6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-356000",
	                        "c7e40ba5a933"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 2 (421.853254ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-356000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-356000 logs -n 25: (1.422547016s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-862000 sudo                                 | kubenet-862000         | jenkins | v1.32.0 | 16 Feb 24 09:46 PST | 16 Feb 24 09:46 PST |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-862000 sudo                                 | kubenet-862000         | jenkins | v1.32.0 | 16 Feb 24 09:46 PST |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-862000 sudo                                 | kubenet-862000         | jenkins | v1.32.0 | 16 Feb 24 09:46 PST | 16 Feb 24 09:46 PST |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-862000 sudo find                            | kubenet-862000         | jenkins | v1.32.0 | 16 Feb 24 09:46 PST | 16 Feb 24 09:46 PST |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-862000 sudo crio                            | kubenet-862000         | jenkins | v1.32.0 | 16 Feb 24 09:46 PST | 16 Feb 24 09:46 PST |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-862000                                      | kubenet-862000         | jenkins | v1.32.0 | 16 Feb 24 09:46 PST | 16 Feb 24 09:46 PST |
	| start   | -p no-preload-575000                                   | no-preload-575000      | jenkins | v1.32.0 | 16 Feb 24 09:46 PST | 16 Feb 24 09:49 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-575000             | no-preload-575000      | jenkins | v1.32.0 | 16 Feb 24 09:49 PST | 16 Feb 24 09:49 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-575000                                   | no-preload-575000      | jenkins | v1.32.0 | 16 Feb 24 09:49 PST | 16 Feb 24 09:49 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-575000                  | no-preload-575000      | jenkins | v1.32.0 | 16 Feb 24 09:49 PST | 16 Feb 24 09:49 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-575000                                   | no-preload-575000      | jenkins | v1.32.0 | 16 Feb 24 09:49 PST | 16 Feb 24 09:55 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-356000        | old-k8s-version-356000 | jenkins | v1.32.0 | 16 Feb 24 09:50 PST |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-356000                              | old-k8s-version-356000 | jenkins | v1.32.0 | 16 Feb 24 09:51 PST | 16 Feb 24 09:51 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-356000             | old-k8s-version-356000 | jenkins | v1.32.0 | 16 Feb 24 09:51 PST | 16 Feb 24 09:51 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-356000                              | old-k8s-version-356000 | jenkins | v1.32.0 | 16 Feb 24 09:51 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| image   | no-preload-575000 image list                           | no-preload-575000      | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:55 PST |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p no-preload-575000                                   | no-preload-575000      | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:55 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-575000                                   | no-preload-575000      | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:55 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-575000                                   | no-preload-575000      | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:55 PST |
	| delete  | -p no-preload-575000                                   | no-preload-575000      | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:55 PST |
	| start   | -p embed-certs-944000                                  | embed-certs-944000     | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:56 PST |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-944000            | embed-certs-944000     | jenkins | v1.32.0 | 16 Feb 24 09:56 PST | 16 Feb 24 09:56 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-944000                                  | embed-certs-944000     | jenkins | v1.32.0 | 16 Feb 24 09:56 PST | 16 Feb 24 09:57 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-944000                 | embed-certs-944000     | jenkins | v1.32.0 | 16 Feb 24 09:57 PST | 16 Feb 24 09:57 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-944000                                  | embed-certs-944000     | jenkins | v1.32.0 | 16 Feb 24 09:57 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 09:57:00
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 09:57:00.974988   19823 out.go:291] Setting OutFile to fd 1 ...
	I0216 09:57:00.975246   19823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:57:00.975251   19823 out.go:304] Setting ErrFile to fd 2...
	I0216 09:57:00.975255   19823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:57:00.975436   19823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 09:57:00.976850   19823 out.go:298] Setting JSON to false
	I0216 09:57:00.999560   19823 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5191,"bootTime":1708101029,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 09:57:00.999678   19823 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 09:57:01.021244   19823 out.go:177] * [embed-certs-944000] minikube v1.32.0 on Darwin 14.3.1
	I0216 09:57:01.063203   19823 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 09:57:01.063270   19823 notify.go:220] Checking for updates...
	I0216 09:57:01.085503   19823 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 09:57:01.107062   19823 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 09:57:01.128367   19823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 09:57:01.151313   19823 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	I0216 09:57:01.193183   19823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 09:57:01.214943   19823 config.go:182] Loaded profile config "embed-certs-944000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 09:57:01.215597   19823 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 09:57:01.271860   19823 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 09:57:01.272024   19823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:57:01.376496   19823 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-16 17:57:01.365845142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:57:01.418710   19823 out.go:177] * Using the docker driver based on existing profile
	I0216 09:57:01.440617   19823 start.go:299] selected driver: docker
	I0216 09:57:01.440643   19823 start.go:903] validating driver "docker" against &{Name:embed-certs-944000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-944000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:57:01.440756   19823 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 09:57:01.445063   19823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 09:57:01.554522   19823 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-16 17:57:01.543676565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 09:57:01.554757   19823 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 09:57:01.554817   19823 cni.go:84] Creating CNI manager for ""
	I0216 09:57:01.554831   19823 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 09:57:01.554842   19823 start_flags.go:323] config:
	{Name:embed-certs-944000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-944000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:57:01.576509   19823 out.go:177] * Starting control plane node embed-certs-944000 in cluster embed-certs-944000
	I0216 09:57:01.599376   19823 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 09:57:01.620379   19823 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 09:57:01.662563   19823 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 09:57:01.662601   19823 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 09:57:01.662631   19823 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0216 09:57:01.662647   19823 cache.go:56] Caching tarball of preloaded images
	I0216 09:57:01.662819   19823 preload.go:174] Found /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 09:57:01.662838   19823 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0216 09:57:01.663586   19823 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/embed-certs-944000/config.json ...
	I0216 09:57:01.713704   19823 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 09:57:01.713724   19823 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 09:57:01.713746   19823 cache.go:194] Successfully downloaded all kic artifacts
	I0216 09:57:01.713799   19823 start.go:365] acquiring machines lock for embed-certs-944000: {Name:mk5c6f7bc3b9eac835eb306dfceb948be03c7824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 09:57:01.713877   19823 start.go:369] acquired machines lock for "embed-certs-944000" in 59.034µs
	I0216 09:57:01.713923   19823 start.go:96] Skipping create...Using existing machine configuration
	I0216 09:57:01.713935   19823 fix.go:54] fixHost starting: 
	I0216 09:57:01.714207   19823 cli_runner.go:164] Run: docker container inspect embed-certs-944000 --format={{.State.Status}}
	I0216 09:57:01.764961   19823 fix.go:102] recreateIfNeeded on embed-certs-944000: state=Stopped err=<nil>
	W0216 09:57:01.764990   19823 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 09:57:01.786896   19823 out.go:177] * Restarting existing docker container for "embed-certs-944000" ...
	I0216 09:57:03.639159   19385 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 09:57:03.639700   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:57:03.639846   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:57:01.829814   19823 cli_runner.go:164] Run: docker start embed-certs-944000
	I0216 09:57:02.072093   19823 cli_runner.go:164] Run: docker container inspect embed-certs-944000 --format={{.State.Status}}
	I0216 09:57:02.128021   19823 kic.go:430] container "embed-certs-944000" state is running.
	I0216 09:57:02.128633   19823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-944000
	I0216 09:57:02.187639   19823 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/embed-certs-944000/config.json ...
	I0216 09:57:02.188082   19823 machine.go:88] provisioning docker machine ...
	I0216 09:57:02.188108   19823 ubuntu.go:169] provisioning hostname "embed-certs-944000"
	I0216 09:57:02.188186   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:02.252819   19823 main.go:141] libmachine: Using SSH client type: native
	I0216 09:57:02.253483   19823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54214 <nil> <nil>}
	I0216 09:57:02.253508   19823 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-944000 && echo "embed-certs-944000" | sudo tee /etc/hostname
	I0216 09:57:02.254966   19823 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0216 09:57:05.415755   19823 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-944000
	
	I0216 09:57:05.415852   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:05.467024   19823 main.go:141] libmachine: Using SSH client type: native
	I0216 09:57:05.467325   19823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54214 <nil> <nil>}
	I0216 09:57:05.467338   19823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-944000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-944000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-944000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 09:57:05.603749   19823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
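
The /etc/hosts command above is idempotent: it rewrites the file only when the hostname entry is missing, replacing an existing 127.0.1.1 line if one is present and appending otherwise. The same guard, reduced to a standalone sketch (hostname taken from the profile above; GNU grep/sed assumed):

    HOST=embed-certs-944000
    # only touch /etc/hosts if no entry for $HOST exists yet
    if ! grep -q "\s$HOST\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $HOST/" /etc/hosts
      else
        echo "127.0.1.1 $HOST" | sudo tee -a /etc/hosts
      fi
    fi
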
	I0216 09:57:05.603770   19823 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17936-1021/.minikube CaCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17936-1021/.minikube}
	I0216 09:57:05.603790   19823 ubuntu.go:177] setting up certificates
	I0216 09:57:05.603801   19823 provision.go:83] configureAuth start
	I0216 09:57:05.603872   19823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-944000
	I0216 09:57:05.655153   19823 provision.go:138] copyHostCerts
	I0216 09:57:05.655257   19823 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem, removing ...
	I0216 09:57:05.655268   19823 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
	I0216 09:57:05.655397   19823 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem (1082 bytes)
	I0216 09:57:05.655651   19823 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem, removing ...
	I0216 09:57:05.655658   19823 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
	I0216 09:57:05.655722   19823 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem (1123 bytes)
	I0216 09:57:05.655875   19823 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem, removing ...
	I0216 09:57:05.655881   19823 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
	I0216 09:57:05.655966   19823 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem (1675 bytes)
	I0216 09:57:05.656109   19823 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-944000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-944000]
	I0216 09:57:05.708460   19823 provision.go:172] copyRemoteCerts
	I0216 09:57:05.708524   19823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 09:57:05.708584   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:05.760159   19823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54214 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/embed-certs-944000/id_rsa Username:docker}
	I0216 09:57:05.861797   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 09:57:05.901792   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 09:57:05.942292   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0216 09:57:05.984958   19823 provision.go:86] duration metric: configureAuth took 381.133776ms
	I0216 09:57:05.997623   19823 ubuntu.go:193] setting minikube options for container-runtime
	I0216 09:57:05.997776   19823 config.go:182] Loaded profile config "embed-certs-944000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 09:57:05.997840   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:06.060677   19823 main.go:141] libmachine: Using SSH client type: native
	I0216 09:57:06.060965   19823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54214 <nil> <nil>}
	I0216 09:57:06.060974   19823 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 09:57:06.200792   19823 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 09:57:06.200808   19823 ubuntu.go:71] root file system type: overlay
	I0216 09:57:06.200898   19823 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 09:57:06.200980   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:06.255074   19823 main.go:141] libmachine: Using SSH client type: native
	I0216 09:57:06.255463   19823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54214 <nil> <nil>}
	I0216 09:57:06.255519   19823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 09:57:06.417105   19823 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 09:57:06.417208   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:06.467950   19823 main.go:141] libmachine: Using SSH client type: native
	I0216 09:57:06.468248   19823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54214 <nil> <nil>}
	I0216 09:57:06.468262   19823 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 09:57:06.615512   19823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 09:57:06.615533   19823 machine.go:91] provisioned docker machine in 4.427355602s
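
Two details of the provisioning step above are worth noting. The rendered unit starts with an empty ExecStart= to clear the command inherited from the base configuration, since systemd allows only one ExecStart for non-oneshot services, and the install command swaps the new file in only when diff reports a change, so an unchanged unit never triggers a daemon restart. The compare-then-swap step as a standalone sketch (paths as in the log):

    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    # diff exits non-zero when the files differ; only then install and restart
    if ! sudo diff -u "$cur" "$new"; then
      sudo mv "$new" "$cur"
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    fi
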
	I0216 09:57:06.615544   19823 start.go:300] post-start starting for "embed-certs-944000" (driver="docker")
	I0216 09:57:06.615552   19823 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 09:57:06.615621   19823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 09:57:06.615677   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:06.666479   19823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54214 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/embed-certs-944000/id_rsa Username:docker}
	I0216 09:57:06.769824   19823 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 09:57:06.773814   19823 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 09:57:06.773842   19823 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 09:57:06.773851   19823 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 09:57:06.773857   19823 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 09:57:06.773865   19823 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/addons for local assets ...
	I0216 09:57:06.773953   19823 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/files for local assets ...
	I0216 09:57:06.774102   19823 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> 21512.pem in /etc/ssl/certs
	I0216 09:57:06.774266   19823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 09:57:06.789005   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:57:06.828542   19823 start.go:303] post-start completed in 212.984352ms
	I0216 09:57:06.828626   19823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 09:57:06.828682   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:06.881241   19823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54214 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/embed-certs-944000/id_rsa Username:docker}
	I0216 09:57:06.973752   19823 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 09:57:06.979115   19823 fix.go:56] fixHost completed within 5.265070813s
	I0216 09:57:06.979136   19823 start.go:83] releasing machines lock for "embed-certs-944000", held for 5.265149615s
	I0216 09:57:06.979242   19823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-944000
	I0216 09:57:07.037600   19823 ssh_runner.go:195] Run: cat /version.json
	I0216 09:57:07.037688   19823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 09:57:07.037696   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:07.037826   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:07.099455   19823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54214 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/embed-certs-944000/id_rsa Username:docker}
	I0216 09:57:07.099780   19823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54214 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/embed-certs-944000/id_rsa Username:docker}
	I0216 09:57:07.294409   19823 ssh_runner.go:195] Run: systemctl --version
	I0216 09:57:07.299295   19823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 09:57:07.304484   19823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 09:57:07.334667   19823 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
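
The find/sed pipeline above normalizes any loopback CNI config: it injects a "name" field when one is missing and pins cniVersion to 1.0.0, since the 1.0.0 config format expects named configurations. After patching, a minimal loopback config has roughly this shape (filename and exact content illustrative, not taken from the log):

    echo '{ "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }' \
      | sudo tee /etc/cni/net.d/200-loopback.conf
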
	I0216 09:57:07.334793   19823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0216 09:57:07.350623   19823 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0216 09:57:07.350638   19823 start.go:475] detecting cgroup driver to use...
	I0216 09:57:07.350656   19823 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:57:07.350773   19823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:57:07.378844   19823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0216 09:57:07.395318   19823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 09:57:07.412076   19823 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 09:57:07.412141   19823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 09:57:07.428746   19823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:57:07.445807   19823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 09:57:07.462326   19823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 09:57:07.478785   19823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 09:57:07.494884   19823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 09:57:07.511412   19823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 09:57:07.527091   19823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 09:57:07.542682   19823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:57:07.601351   19823 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 09:57:07.697915   19823 start.go:475] detecting cgroup driver to use...
	I0216 09:57:07.697934   19823 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 09:57:07.698014   19823 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 09:57:07.720900   19823 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 09:57:07.720975   19823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 09:57:07.741098   19823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 09:57:07.770912   19823 ssh_runner.go:195] Run: which cri-dockerd
	I0216 09:57:07.776110   19823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 09:57:07.795586   19823 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 09:57:07.844740   19823 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 09:57:07.937020   19823 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 09:57:08.044432   19823 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 09:57:08.044521   19823 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
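
The 130-byte daemon.json written here is not shown in the log, but a Docker daemon.json that selects the cgroupfs driver typically carries an exec-opts entry like the following (illustrative content, not the exact bytes minikube writes):

    echo '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' \
      | sudo tee /etc/docker/daemon.json
    # the log follows up with daemon-reload and a docker restart, as seen below
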
	I0216 09:57:08.079701   19823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:57:08.155814   19823 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 09:57:08.473449   19823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0216 09:57:08.491463   19823 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0216 09:57:08.510598   19823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 09:57:08.528412   19823 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0216 09:57:08.593432   19823 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0216 09:57:08.658169   19823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:57:08.721508   19823 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0216 09:57:08.758467   19823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 09:57:08.777535   19823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 09:57:08.841326   19823 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0216 09:57:08.934951   19823 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0216 09:57:08.935060   19823 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0216 09:57:08.939778   19823 start.go:543] Will wait 60s for crictl version
	I0216 09:57:08.939832   19823 ssh_runner.go:195] Run: which crictl
	I0216 09:57:08.944040   19823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0216 09:57:09.001978   19823 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0216 09:57:09.002068   19823 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 09:57:09.027729   19823 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 09:57:08.641597   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:57:08.641778   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:57:09.077118   19823 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0216 09:57:09.077248   19823 cli_runner.go:164] Run: docker exec -t embed-certs-944000 dig +short host.docker.internal
	I0216 09:57:09.200652   19823 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0216 09:57:09.201622   19823 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0216 09:57:09.206230   19823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 09:57:09.225009   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:09.277782   19823 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 09:57:09.277859   19823 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:57:09.295948   19823 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0216 09:57:09.295971   19823 docker.go:615] Images already preloaded, skipping extraction
	I0216 09:57:09.296064   19823 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 09:57:09.314419   19823 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0216 09:57:09.314444   19823 cache_images.go:84] Images are preloaded, skipping loading
	I0216 09:57:09.314521   19823 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 09:57:09.361637   19823 cni.go:84] Creating CNI manager for ""
	I0216 09:57:09.361659   19823 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 09:57:09.361677   19823 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 09:57:09.361694   19823 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-944000 NodeName:embed-certs-944000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0216 09:57:09.361801   19823 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-944000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
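
The generated file above is a single multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---, all parsed by kubeadm in one pass. One way to sanity-check such a file, against the copy the log later writes to /var/tmp/minikube/kubeadm.yaml.new, is a dry run:

    # prints the objects kubeadm would create without bootstrapping the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
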
	
	I0216 09:57:09.361860   19823 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-944000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-944000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 09:57:09.361925   19823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0216 09:57:09.376961   19823 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 09:57:09.377041   19823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 09:57:09.392592   19823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0216 09:57:09.421600   19823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 09:57:09.450954   19823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0216 09:57:09.480850   19823 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0216 09:57:09.485340   19823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 09:57:09.502993   19823 certs.go:56] Setting up /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/embed-certs-944000 for IP: 192.168.67.2
	I0216 09:57:09.503015   19823 certs.go:190] acquiring lock for shared ca certs: {Name:mk8795f926ccc5dd497b243df5a2c158b5c5b28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:57:09.503161   19823 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key
	I0216 09:57:09.503214   19823 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key
	I0216 09:57:09.503299   19823 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/embed-certs-944000/client.key
	I0216 09:57:09.503360   19823 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/embed-certs-944000/apiserver.key.c7fa3a9e
	I0216 09:57:09.503419   19823 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/embed-certs-944000/proxy-client.key
	I0216 09:57:09.503628   19823 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem (1338 bytes)
	W0216 09:57:09.503683   19823 certs.go:433] ignoring /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151_empty.pem, impossibly tiny 0 bytes
	I0216 09:57:09.503694   19823 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 09:57:09.503733   19823 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem (1082 bytes)
	I0216 09:57:09.503768   19823 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem (1123 bytes)
	I0216 09:57:09.503803   19823 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem (1675 bytes)
	I0216 09:57:09.503880   19823 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem (1708 bytes)
	I0216 09:57:09.504432   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/embed-certs-944000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 09:57:09.545005   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/embed-certs-944000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 09:57:09.586565   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/embed-certs-944000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 09:57:09.628878   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/embed-certs-944000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 09:57:09.669673   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 09:57:09.710670   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 09:57:09.753865   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 09:57:09.800323   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 09:57:09.847952   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /usr/share/ca-certificates/21512.pem (1708 bytes)
	I0216 09:57:09.889752   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 09:57:09.932386   19823 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem --> /usr/share/ca-certificates/2151.pem (1338 bytes)
	I0216 09:57:09.976268   19823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 09:57:10.006104   19823 ssh_runner.go:195] Run: openssl version
	I0216 09:57:10.012387   19823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 09:57:10.028020   19823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:57:10.032952   19823 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:57:10.033025   19823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 09:57:10.039591   19823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 09:57:10.054866   19823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2151.pem && ln -fs /usr/share/ca-certificates/2151.pem /etc/ssl/certs/2151.pem"
	I0216 09:57:10.070789   19823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2151.pem
	I0216 09:57:10.075638   19823 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:51 /usr/share/ca-certificates/2151.pem
	I0216 09:57:10.075703   19823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2151.pem
	I0216 09:57:10.083289   19823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2151.pem /etc/ssl/certs/51391683.0"
	I0216 09:57:10.098110   19823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21512.pem && ln -fs /usr/share/ca-certificates/21512.pem /etc/ssl/certs/21512.pem"
	I0216 09:57:10.115163   19823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21512.pem
	I0216 09:57:10.119630   19823 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:51 /usr/share/ca-certificates/21512.pem
	I0216 09:57:10.119673   19823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21512.pem
	I0216 09:57:10.127191   19823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21512.pem /etc/ssl/certs/3ec20f2e.0"
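The hash-and-symlink sequence above is how OpenSSL's certificate directory lookup works: each CA in /etc/ssl/certs must be reachable through a link named <subject-hash>.0. Reproducing one of the steps from this log by hand (same file; the resulting hash matches the b5213941.0 link created above):

    # Compute the subject hash OpenSSL uses for directory lookups, then link it.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> b5213941.0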
	I0216 09:57:10.144210   19823 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 09:57:10.148482   19823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 09:57:10.156195   19823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 09:57:10.163149   19823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 09:57:10.169908   19823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 09:57:10.176294   19823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 09:57:10.182755   19823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
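Each -checkend 86400 probe above asks openssl whether the certificate is still valid 86400 seconds (24 hours) from now; exit status 0 means it is, non-zero means it expires inside that window. A standalone equivalent:

    # Exit 0 if the cert outlives the next 24h; non-zero would trigger regeneration.
    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-etcd-client.crt; then
        echo "cert valid for at least another day"
    else
        echo "cert expires within 24h; would regenerate"
    fi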
	I0216 09:57:10.189590   19823 kubeadm.go:404] StartCluster: {Name:embed-certs-944000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-944000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 09:57:10.189713   19823 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 09:57:10.206655   19823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 09:57:10.222504   19823 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 09:57:10.222523   19823 kubeadm.go:636] restartCluster start
	I0216 09:57:10.222574   19823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 09:57:10.237895   19823 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:10.238024   19823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-944000
	I0216 09:57:10.290785   19823 kubeconfig.go:135] verify returned: extract IP: "embed-certs-944000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 09:57:10.290960   19823 kubeconfig.go:146] "embed-certs-944000" context is missing from /Users/jenkins/minikube-integration/17936-1021/kubeconfig - will repair!
	I0216 09:57:10.291301   19823 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/kubeconfig: {Name:mkc64745a91dd32fe2631c66fb95eca6401b716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 09:57:10.292901   19823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 09:57:10.308631   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:10.308785   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:10.324711   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:10.808750   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:10.808906   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:10.831268   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:11.310449   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:11.310525   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:11.327302   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:11.809640   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:11.809715   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:11.826666   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:12.309402   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:12.309481   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:12.326816   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:12.810147   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:12.810233   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:12.828244   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:13.310748   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:13.310887   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:13.329084   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:13.808794   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:13.808876   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:13.827175   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:14.309591   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:14.309666   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:14.327319   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:14.809069   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:14.809185   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:14.825959   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:15.308898   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:15.309019   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:15.326755   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:15.809631   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:15.809728   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:15.826477   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:18.644555   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:57:18.644699   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:57:16.309124   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:16.309223   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:16.326206   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:16.809239   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:16.809327   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:16.826280   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:17.309588   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:17.309671   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:17.325954   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:17.809268   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:17.809405   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:17.827656   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:18.308870   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:18.308946   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:18.325725   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:18.810740   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:18.810827   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:18.826817   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:19.310823   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:19.310963   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:19.328419   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:19.810242   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:19.810378   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:19.828867   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:20.310170   19823 api_server.go:166] Checking apiserver status ...
	I0216 09:57:20.310243   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 09:57:20.327673   19823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:20.327688   19823 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
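The block above is a fixed-interval poll: roughly every 500ms the runner asks pgrep for a kube-apiserver process, and once the surrounding context's deadline lapses it concludes "needs reconfigure". A shell approximation of that loop (the 10s deadline is an illustrative stand-in, not minikube's actual timeout):

    # Poll for the apiserver process until it appears or a deadline passes.
    deadline=$(( $(date +%s) + 10 ))    # illustrative deadline, not minikube's real value
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            echo "apiserver never appeared: context deadline exceeded" >&2
            break
        fi
        sleep 0.5
    done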
	I0216 09:57:20.327704   19823 kubeadm.go:1135] stopping kube-system containers ...
	I0216 09:57:20.327787   19823 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 09:57:20.345597   19823 docker.go:483] Stopping containers: [d6f0ed9a3429 6e63f9a84a40 fd5cd991e3ec db622a140877 51f5c2d43e0c a4c3ec278195 549c7786082b 120fe22f2392 0a9eed87d29c b93c769f4ad0 47c9b40a9935 76daee2539bb 30b9f6de4b0b ff03aaf8ea3c 99d1a0716e2b]
	I0216 09:57:20.345690   19823 ssh_runner.go:195] Run: docker stop d6f0ed9a3429 6e63f9a84a40 fd5cd991e3ec db622a140877 51f5c2d43e0c a4c3ec278195 549c7786082b 120fe22f2392 0a9eed87d29c b93c769f4ad0 47c9b40a9935 76daee2539bb 30b9f6de4b0b ff03aaf8ea3c 99d1a0716e2b
	I0216 09:57:20.364852   19823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0216 09:57:20.383406   19823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:57:20.398458   19823 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 16 17:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 16 17:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Feb 16 17:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 16 17:56 /etc/kubernetes/scheduler.conf
	
	I0216 09:57:20.398525   19823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0216 09:57:20.413646   19823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0216 09:57:20.428877   19823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0216 09:57:20.443467   19823 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:20.443536   19823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0216 09:57:20.458110   19823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0216 09:57:20.472817   19823 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:57:20.472879   19823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0216 09:57:20.488017   19823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 09:57:20.503106   19823 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 09:57:20.503120   19823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:57:20.560410   19823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:57:21.134343   19823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:57:21.272461   19823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:57:21.333861   19823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
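Instead of a full kubeadm init, the restart path replays individual init phases against the existing config: regenerate certs and kubeconfigs, restart the kubelet, and rewrite the control-plane and etcd static pod manifests. The same five-phase sequence, runnable on the node with the paths from this log (kphase is a hypothetical helper for brevity):

    kphase() {
        # sudo resets PATH, so pass the minikube binaries dir explicitly, as the log does.
        sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
            kubeadm init phase "$@" --config /var/tmp/minikube/kubeadm.yaml
    }
    kphase certs all
    kphase kubeconfig all
    kphase kubelet-start
    kphase control-plane all
    kphase etcd local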
	I0216 09:57:21.451017   19823 api_server.go:52] waiting for apiserver process to appear ...
	I0216 09:57:21.451114   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:57:21.951233   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:57:22.451932   19823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:57:22.533939   19823 api_server.go:72] duration metric: took 1.082897491s to wait for apiserver process to appear ...
	I0216 09:57:22.533964   19823 api_server.go:88] waiting for apiserver healthz status ...
	I0216 09:57:22.533990   19823 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54213/healthz ...
	I0216 09:57:22.535298   19823 api_server.go:269] stopped: https://127.0.0.1:54213/healthz: Get "https://127.0.0.1:54213/healthz": EOF
	I0216 09:57:23.034550   19823 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54213/healthz ...
	I0216 09:57:25.422951   19823 api_server.go:279] https://127.0.0.1:54213/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 09:57:25.422987   19823 api_server.go:103] status: https://127.0.0.1:54213/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 09:57:25.423000   19823 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54213/healthz ...
	I0216 09:57:25.432540   19823 api_server.go:279] https://127.0.0.1:54213/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 09:57:25.432567   19823 api_server.go:103] status: https://127.0.0.1:54213/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 09:57:25.534245   19823 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54213/healthz ...
	I0216 09:57:25.629486   19823 api_server.go:279] https://127.0.0.1:54213/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 09:57:25.629523   19823 api_server.go:103] status: https://127.0.0.1:54213/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:57:26.034799   19823 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54213/healthz ...
	I0216 09:57:26.041019   19823 api_server.go:279] https://127.0.0.1:54213/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 09:57:26.041037   19823 api_server.go:103] status: https://127.0.0.1:54213/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:57:26.534475   19823 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54213/healthz ...
	I0216 09:57:26.541114   19823 api_server.go:279] https://127.0.0.1:54213/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 09:57:26.541131   19823 api_server.go:103] status: https://127.0.0.1:54213/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 09:57:27.035063   19823 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54213/healthz ...
	I0216 09:57:27.112855   19823 api_server.go:279] https://127.0.0.1:54213/healthz returned 200:
	ok
	I0216 09:57:27.123710   19823 api_server.go:141] control plane version: v1.28.4
	I0216 09:57:27.123738   19823 api_server.go:131] duration metric: took 4.589676277s to wait for apiserver health ...
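The healthz progression above is typical of a cold apiserver: connection EOF while the socket is still down, 403 for the anonymous probe until the rbac/bootstrap-roles post-start hook finishes, 500 with per-check detail while the remaining hooks settle, then 200. The same endpoint can be probed with curl; the client cert paths below follow the usual minikube profile layout and are assumptions:

    # Anonymous probe (what yields the 403s above until bootstrap RBAC lands).
    curl -sk https://127.0.0.1:54213/healthz
    # Authenticated probe with per-check detail, like the 500 bodies above.
    curl -sk --cert "$HOME/.minikube/profiles/embed-certs-944000/client.crt" \
             --key  "$HOME/.minikube/profiles/embed-certs-944000/client.key" \
             'https://127.0.0.1:54213/healthz?verbose=1'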
	I0216 09:57:27.123753   19823 cni.go:84] Creating CNI manager for ""
	I0216 09:57:27.123770   19823 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 09:57:27.147064   19823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 09:57:27.168294   19823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 09:57:27.239951   19823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
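The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config announced on the previous line. Its exact contents are not captured in this log; a representative bridge conflist of roughly that shape, with the subnet as an assumption, would be written like this:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge", "bridge": "bridge", "addIf": "true",
          "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF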
	I0216 09:57:27.429215   19823 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 09:57:27.442677   19823 system_pods.go:59] 8 kube-system pods found
	I0216 09:57:27.442709   19823 system_pods.go:61] "coredns-5dd5756b68-92qw4" [e927d746-650b-481c-9ded-890c2733bfe5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0216 09:57:27.442721   19823 system_pods.go:61] "etcd-embed-certs-944000" [2fcf6e17-42db-4cb1-a710-8f94c1e06aeb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 09:57:27.442735   19823 system_pods.go:61] "kube-apiserver-embed-certs-944000" [9e5996a0-ced5-481d-8638-b415cfcdda14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 09:57:27.442747   19823 system_pods.go:61] "kube-controller-manager-embed-certs-944000" [fd1d5319-6eb5-4db8-8c61-694b365f689c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 09:57:27.442761   19823 system_pods.go:61] "kube-proxy-6n2sw" [99ef0054-eac5-4445-bac8-2c4e599d3aad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0216 09:57:27.442775   19823 system_pods.go:61] "kube-scheduler-embed-certs-944000" [7970f854-51d6-4d04-bcc3-3ec10fdc2fcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 09:57:27.442788   19823 system_pods.go:61] "metrics-server-57f55c9bc5-4zqzx" [54d7a8be-185a-4f30-8a59-ad3aa67d5cb7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 09:57:27.442805   19823 system_pods.go:61] "storage-provisioner" [825845c1-76a7-45cf-9f05-c83acb5c134f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0216 09:57:27.442811   19823 system_pods.go:74] duration metric: took 13.581743ms to wait for pod list to return data ...
	I0216 09:57:27.442819   19823 node_conditions.go:102] verifying NodePressure condition ...
	I0216 09:57:27.514849   19823 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0216 09:57:27.514878   19823 node_conditions.go:123] node cpu capacity is 12
	I0216 09:57:27.514904   19823 node_conditions.go:105] duration metric: took 72.071939ms to run NodePressure ...
	I0216 09:57:27.514971   19823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 09:57:28.154056   19823 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0216 09:57:28.158407   19823 kubeadm.go:787] kubelet initialised
	I0216 09:57:28.158421   19823 kubeadm.go:788] duration metric: took 4.350151ms waiting for restarted kubelet to initialise ...
	I0216 09:57:28.158428   19823 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
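Each pod_ready wait that follows polls one pod's Ready condition until it flips to True or the 4m budget expires. An equivalent spot check from a shell, assuming kubectl can use the embed-certs-944000 context from the repaired kubeconfig:

    kubectl --context embed-certs-944000 -n kube-system \
      wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m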
	I0216 09:57:28.164583   19823 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-92qw4" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:30.170370   19823 pod_ready.go:102] pod "coredns-5dd5756b68-92qw4" in "kube-system" namespace has status "Ready":"False"
	I0216 09:57:32.170810   19823 pod_ready.go:102] pod "coredns-5dd5756b68-92qw4" in "kube-system" namespace has status "Ready":"False"
	I0216 09:57:32.671730   19823 pod_ready.go:92] pod "coredns-5dd5756b68-92qw4" in "kube-system" namespace has status "Ready":"True"
	I0216 09:57:32.671744   19823 pod_ready.go:81] duration metric: took 4.507058397s waiting for pod "coredns-5dd5756b68-92qw4" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:32.671752   19823 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-944000" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:33.678476   19823 pod_ready.go:92] pod "etcd-embed-certs-944000" in "kube-system" namespace has status "Ready":"True"
	I0216 09:57:33.678488   19823 pod_ready.go:81] duration metric: took 1.00670506s waiting for pod "etcd-embed-certs-944000" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:33.678495   19823 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-944000" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:35.184550   19823 pod_ready.go:92] pod "kube-apiserver-embed-certs-944000" in "kube-system" namespace has status "Ready":"True"
	I0216 09:57:35.184562   19823 pod_ready.go:81] duration metric: took 1.506032556s waiting for pod "kube-apiserver-embed-certs-944000" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:35.184569   19823 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-944000" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:38.646280   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:57:38.646430   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:57:37.192843   19823 pod_ready.go:102] pod "kube-controller-manager-embed-certs-944000" in "kube-system" namespace has status "Ready":"False"
	I0216 09:57:39.691021   19823 pod_ready.go:102] pod "kube-controller-manager-embed-certs-944000" in "kube-system" namespace has status "Ready":"False"
	I0216 09:57:41.691397   19823 pod_ready.go:102] pod "kube-controller-manager-embed-certs-944000" in "kube-system" namespace has status "Ready":"False"
	I0216 09:57:42.191022   19823 pod_ready.go:92] pod "kube-controller-manager-embed-certs-944000" in "kube-system" namespace has status "Ready":"True"
	I0216 09:57:42.191035   19823 pod_ready.go:81] duration metric: took 7.006316952s waiting for pod "kube-controller-manager-embed-certs-944000" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:42.191046   19823 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6n2sw" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:42.195961   19823 pod_ready.go:92] pod "kube-proxy-6n2sw" in "kube-system" namespace has status "Ready":"True"
	I0216 09:57:42.195973   19823 pod_ready.go:81] duration metric: took 4.92172ms waiting for pod "kube-proxy-6n2sw" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:42.195981   19823 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-944000" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:42.200794   19823 pod_ready.go:92] pod "kube-scheduler-embed-certs-944000" in "kube-system" namespace has status "Ready":"True"
	I0216 09:57:42.200804   19823 pod_ready.go:81] duration metric: took 4.816896ms waiting for pod "kube-scheduler-embed-certs-944000" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:42.200811   19823 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace to be "Ready" ...
	I0216 09:57:44.207584   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:57:46.706482   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:57:48.707139   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:57:51.207635   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:57:53.207722   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:57:55.708403   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:57:58.207179   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:00.706795   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:03.208910   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:05.707432   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:08.206659   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:10.209158   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:12.707291   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:14.708398   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:18.649737   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:58:18.649899   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:58:18.649910   19385 kubeadm.go:322] 
	I0216 09:58:18.649942   19385 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 09:58:18.649979   19385 kubeadm.go:322] 	timed out waiting for the condition
	I0216 09:58:18.649991   19385 kubeadm.go:322] 
	I0216 09:58:18.650018   19385 kubeadm.go:322] This error is likely caused by:
	I0216 09:58:18.650043   19385 kubeadm.go:322] 	- The kubelet is not running
	I0216 09:58:18.650122   19385 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 09:58:18.650129   19385 kubeadm.go:322] 
	I0216 09:58:18.650202   19385 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 09:58:18.650224   19385 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 09:58:18.650249   19385 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 09:58:18.650252   19385 kubeadm.go:322] 
	I0216 09:58:18.650338   19385 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 09:58:18.650412   19385 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0216 09:58:18.650481   19385 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0216 09:58:18.650528   19385 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 09:58:18.650590   19385 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 09:58:18.650616   19385 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 09:58:18.653477   19385 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 09:58:18.653568   19385 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0216 09:58:18.653737   19385 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 09:58:18.653828   19385 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 09:58:18.653909   19385 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 09:58:18.653974   19385 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0216 09:58:18.654044   19385 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
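Among the warnings in the stderr above, the cgroup-driver mismatch has a standard remedy: kubeadm recommends the systemd driver while this node's Docker is on cgroupfs. The conventional fix (general Docker/kubeadm guidance, not something this job attempted, and not necessarily the cause of this particular kubelet failure) is:

    # Switch Docker to the systemd cgroup driver, then restart it.
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    { "exec-opts": ["native.cgroupdriver=systemd"] }
    EOF
    sudo systemctl restart docker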
	
	I0216 09:58:18.654072   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 09:58:19.076900   19385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 09:58:19.094331   19385 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 09:58:19.114074   19385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 09:58:19.129946   19385 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 09:58:19.129970   19385 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 09:58:19.191424   19385 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 09:58:19.191469   19385 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 09:58:19.551196   19385 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 09:58:19.551284   19385 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 09:58:19.551381   19385 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0216 09:58:19.720240   19385 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 09:58:19.720963   19385 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 09:58:19.727407   19385 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 09:58:19.798675   19385 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 09:58:19.820119   19385 out.go:204]   - Generating certificates and keys ...
	I0216 09:58:19.820189   19385 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 09:58:19.820253   19385 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 09:58:19.820322   19385 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 09:58:19.820370   19385 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 09:58:19.820428   19385 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 09:58:19.820477   19385 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 09:58:19.820534   19385 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 09:58:19.820587   19385 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 09:58:19.820652   19385 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 09:58:19.820713   19385 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 09:58:19.820745   19385 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 09:58:19.820788   19385 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 09:58:19.957136   19385 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 09:58:20.279617   19385 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 09:58:20.368731   19385 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 09:58:20.443289   19385 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 09:58:20.444168   19385 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 09:58:17.207716   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:19.207927   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:20.465940   19385 out.go:204]   - Booting up control plane ...
	I0216 09:58:20.466059   19385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 09:58:20.466137   19385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 09:58:20.466194   19385 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 09:58:20.466263   19385 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 09:58:20.466377   19385 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 09:58:21.210001   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:23.708676   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:26.208453   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:28.709747   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:31.208344   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:33.208835   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:35.708445   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:38.208181   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:40.210836   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:42.707863   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:44.708574   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:46.710688   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:49.207884   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:51.208155   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:53.707855   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:55.709056   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:58:58.208648   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:00.210006   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:00.456106   19385 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 09:59:00.456612   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:59:00.456763   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:59:02.210209   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:04.711378   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:05.458120   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:59:05.458270   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:59:07.208164   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:09.209607   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:11.708692   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:13.710584   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:15.710753   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:15.461378   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:59:15.461766   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:59:18.209253   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:20.209915   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:22.211571   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:24.710029   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:27.209164   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:29.708981   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:31.710937   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:34.209952   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:35.463321   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 09:59:35.463527   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 09:59:36.709664   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:38.709791   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:40.711751   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:43.209140   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:45.710390   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:47.712149   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:50.211060   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:52.710603   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:55.211758   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:57.212557   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 09:59:59.709986   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 10:00:01.710974   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 10:00:04.209900   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 10:00:06.211281   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 10:00:08.709981   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 10:00:10.710525   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 10:00:15.465026   19385 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0216 10:00:15.465189   19385 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0216 10:00:15.465200   19385 kubeadm.go:322] 
	I0216 10:00:15.465234   19385 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 10:00:15.465267   19385 kubeadm.go:322] 	timed out waiting for the condition
	I0216 10:00:15.465272   19385 kubeadm.go:322] 
	I0216 10:00:15.465304   19385 kubeadm.go:322] This error is likely caused by:
	I0216 10:00:15.465335   19385 kubeadm.go:322] 	- The kubelet is not running
	I0216 10:00:15.465416   19385 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 10:00:15.465421   19385 kubeadm.go:322] 
	I0216 10:00:15.465496   19385 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 10:00:15.465526   19385 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 10:00:15.465551   19385 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 10:00:15.465556   19385 kubeadm.go:322] 
	I0216 10:00:15.465636   19385 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 10:00:15.465707   19385 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 10:00:15.465782   19385 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 10:00:15.465826   19385 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 10:00:15.465891   19385 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 10:00:15.465921   19385 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 10:00:15.470210   19385 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 10:00:15.470273   19385 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0216 10:00:15.470372   19385 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 10:00:15.470459   19385 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 10:00:15.470534   19385 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 10:00:15.470604   19385 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 10:00:15.470627   19385 kubeadm.go:406] StartCluster complete in 8m6.098704701s
	I0216 10:00:15.470719   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 10:00:15.488039   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.488052   19385 logs.go:278] No container was found matching "kube-apiserver"
	I0216 10:00:15.488122   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 10:00:15.508099   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.508117   19385 logs.go:278] No container was found matching "etcd"
	I0216 10:00:15.508194   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 10:00:15.533356   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.533372   19385 logs.go:278] No container was found matching "coredns"
	I0216 10:00:15.533514   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 10:00:15.563458   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.563477   19385 logs.go:278] No container was found matching "kube-scheduler"
	I0216 10:00:15.563548   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 10:00:15.582274   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.582288   19385 logs.go:278] No container was found matching "kube-proxy"
	I0216 10:00:15.582358   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 10:00:15.624621   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.624636   19385 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 10:00:15.624703   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 10:00:15.644158   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.644172   19385 logs.go:278] No container was found matching "kindnet"
	I0216 10:00:15.644238   19385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 10:00:15.663073   19385 logs.go:276] 0 containers: []
	W0216 10:00:15.663087   19385 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 10:00:15.663095   19385 logs.go:123] Gathering logs for Docker ...
	I0216 10:00:15.663102   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 10:00:15.685020   19385 logs.go:123] Gathering logs for container status ...
	I0216 10:00:15.685036   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 10:00:15.750347   19385 logs.go:123] Gathering logs for kubelet ...
	I0216 10:00:15.750361   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0216 10:00:15.796119   19385 logs.go:123] Gathering logs for dmesg ...
	I0216 10:00:15.796134   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 10:00:15.816457   19385 logs.go:123] Gathering logs for describe nodes ...
	I0216 10:00:15.816475   19385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 10:00:15.883186   19385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0216 10:00:15.883203   19385 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 10:00:15.883259   19385 out.go:239] * 
	W0216 10:00:15.883307   19385 out.go:239] * 
	W0216 10:00:15.884183   19385 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
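	For reference, the boxed hint above is a single CLI invocation; a minimal sketch, assuming this run's binary path and failing profile name (old-k8s-version-356000):
	
		# Capture the full minikube log bundle to attach to a GitHub issue.
		# The --file flag comes from the hint; the profile name is taken
		# from this run's node name, not from the hint itself.
		out/minikube-darwin-amd64 logs --file=logs.txt -p old-k8s-version-356000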
	I0216 10:00:15.947677   19385 out.go:177] 
	I0216 10:00:13.209291   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	I0216 10:00:15.210498   19823 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4zqzx" in "kube-system" namespace has status "Ready":"False"
	W0216 10:00:15.990622   19385 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 10:00:15.990676   19385 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 10:00:15.990745   19385 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 10:00:16.054560   19385 out.go:177] 
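	The suggestion above can be exercised directly by restarting the profile with the kubelet's cgroup driver pinned to systemd; a hedged sketch, where only --extra-config=kubelet.cgroup-driver=systemd comes from the hint and the remaining flags are assumptions copied from this run:
	
		# Retry the same profile with the kubelet configured for the
		# systemd cgroup driver, per the suggestion in the log above.
		out/minikube-darwin-amd64 start -p old-k8s-version-356000 \
		  --kubernetes-version=v1.16.0 --driver=docker \
		  --extra-config=kubelet.cgroup-driver=systemd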
	
	
	==> Docker <==
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.343496291Z" level=info msg="Loading containers: start."
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.435583152Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.472401887Z" level=info msg="Loading containers: done."
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.480552174Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.480629800Z" level=info msg="Daemon has completed initialization"
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.499819622Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.500020070Z" level=info msg="API listen on [::]:2376"
	Feb 16 17:51:56 old-k8s-version-356000 systemd[1]: Started Docker Application Container Engine.
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:52:05.136295167Z" level=info msg="Processing signal 'terminated'"
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:52:05.137126387Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:52:05.137685126Z" level=info msg="Daemon shutdown complete"
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: docker.service: Deactivated successfully.
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: Starting Docker Application Container Engine...
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:05.196506381Z" level=info msg="Starting up"
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:05.719869752Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:05.932365156Z" level=info msg="Loading containers: start."
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.053011191Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.091165695Z" level=info msg="Loading containers: done."
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.099557844Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.099621366Z" level=info msg="Daemon has completed initialization"
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.119782992Z" level=info msg="API listen on [::]:2376"
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.119947117Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 16 17:52:06 old-k8s-version-356000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	time="2024-02-16T18:00:17Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 18:00:18 up  1:19,  0 users,  load average: 5.98, 5.32, 5.20
	Linux old-k8s-version-356000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 16 18:00:16 old-k8s-version-356000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 16 18:00:17 old-k8s-version-356000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 147.
	Feb 16 18:00:17 old-k8s-version-356000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 16 18:00:17 old-k8s-version-356000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 16 18:00:17 old-k8s-version-356000 kubelet[19265]: I0216 18:00:17.338162   19265 server.go:410] Version: v1.16.0
	Feb 16 18:00:17 old-k8s-version-356000 kubelet[19265]: I0216 18:00:17.338387   19265 plugins.go:100] No cloud provider specified.
	Feb 16 18:00:17 old-k8s-version-356000 kubelet[19265]: I0216 18:00:17.338397   19265 server.go:773] Client rotation is on, will bootstrap in background
	Feb 16 18:00:17 old-k8s-version-356000 kubelet[19265]: I0216 18:00:17.340279   19265 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 16 18:00:17 old-k8s-version-356000 kubelet[19265]: W0216 18:00:17.341115   19265 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 16 18:00:17 old-k8s-version-356000 kubelet[19265]: W0216 18:00:17.341182   19265 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 16 18:00:17 old-k8s-version-356000 kubelet[19265]: F0216 18:00:17.341205   19265 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 16 18:00:17 old-k8s-version-356000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 16 18:00:17 old-k8s-version-356000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 16 18:00:17 old-k8s-version-356000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 148.
	Feb 16 18:00:17 old-k8s-version-356000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 16 18:00:17 old-k8s-version-356000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 16 18:00:18 old-k8s-version-356000 kubelet[19388]: I0216 18:00:18.052311   19388 server.go:410] Version: v1.16.0
	Feb 16 18:00:18 old-k8s-version-356000 kubelet[19388]: I0216 18:00:18.052536   19388 plugins.go:100] No cloud provider specified.
	Feb 16 18:00:18 old-k8s-version-356000 kubelet[19388]: I0216 18:00:18.052546   19388 server.go:773] Client rotation is on, will bootstrap in background
	Feb 16 18:00:18 old-k8s-version-356000 kubelet[19388]: I0216 18:00:18.055699   19388 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 16 18:00:18 old-k8s-version-356000 kubelet[19388]: W0216 18:00:18.056241   19388 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 16 18:00:18 old-k8s-version-356000 kubelet[19388]: W0216 18:00:18.056299   19388 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 16 18:00:18 old-k8s-version-356000 kubelet[19388]: F0216 18:00:18.056322   19388 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 16 18:00:18 old-k8s-version-356000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 16 18:00:18 old-k8s-version-356000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
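Two details in the captured log above tie the failure together. The fatal crictl error under "==> container status <==" only says that nothing is serving the default CRI socket: with Kubernetes v1.16 the dockershim socket is created by the kubelet itself, and "==> kubelet <==" shows the kubelet dying on "failed to run Kubelet: mountpoint for cpu not found", i.e. it expects a cgroup v1 "cpu" controller mount. A hedged diagnostic sketch (profile name from this run; the stat probe is a generic cgroup check, not something minikube prints):

	# Which cgroup hierarchy does the node container expose?
	#   cgroup2fs -> unified cgroup v2: no /sys/fs/cgroup/cpu mountpoint,
	#                which is exactly what the v1.16 kubelet trips over
	#   tmpfs     -> cgroup v1 with per-controller mounts
	out/minikube-darwin-amd64 ssh -p old-k8s-version-356000 -- "stat -fc %T /sys/fs/cgroup"
	# The same crictl-then-docker fallback the log gatherer runs above.
	out/minikube-darwin-amd64 ssh -p old-k8s-version-356000 -- "sudo crictl ps -a || sudo docker ps -a"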
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 2 (424.784342ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-356000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (509.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:00:34.804544    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:00:46.743705    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:01:15.134368    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:01:57.851111    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:02:05.118163    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 10:02:08.665452    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:03:01.538084    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:03:51.294478    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:03:59.753616    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:04:24.821493    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:04:52.509442    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 10:04:56.394041    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 10:04:56.447039    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:05:10.021666    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 10:05:11.950990    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:05:22.799580    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:05:34.811820    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:06:15.141047    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
E0216 10:06:19.507502    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:06:35.003583    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:07:05.124012    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:07:14.971612    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:07:38.187641    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:08:01.543648    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:08:33.287712    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:08:46.984124    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:08:51.299967    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 10:08:59.758388    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-356000 -n old-k8s-version-356000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 2 (542.792807ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-356000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
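The wall of repeated WARNING lines above comes from a poll loop: the harness lists pods labeled k8s-app=kubernetes-dashboard against the forwarded apiserver endpoint (127.0.0.1:54079) until one is running or the 9m0s budget expires; each EOF is one failed list attempt against an apiserver that never came back. A minimal sketch of that style of poll, assuming a client-go clientset (the interval and readiness check here are illustrative, not the harness's actual code):

// polldashboard.go: hedged sketch of polling for a pod by label selector.
package polldashboard

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitForDashboard polls until a matching pod is Running or ctx expires.
func WaitForDashboard(ctx context.Context, cs kubernetes.Interface) error {
	tick := time.NewTicker(3 * time.Second) // assumed poll interval
	defer tick.Stop()
	for {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// This branch matches the WARNING lines above: log and keep polling.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded"
		case <-tick.C:
		}
	}
}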
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-356000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-356000:

-- stdout --
	[
	    {
	        "Id": "c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01",
	        "Created": "2024-02-16T17:45:56.532939996Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 376234,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:51:50.249201454Z",
	            "FinishedAt": "2024-02-16T17:51:47.463182294Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hosts",
	        "LogPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01-json.log",
	        "Name": "/old-k8s-version-356000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-356000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-356000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-356000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-356000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-356000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3796cb96e0afd4653a016009a08ea7784172e6af1b37db6d9e51767cab847db4",
	            "SandboxKey": "/var/run/docker/netns/3796cb96e0af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54075"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54076"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54078"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54079"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-356000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c7e40ba5a933",
	                        "old-k8s-version-356000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "2b231f9382e31cc79f696866baa9c7eea268e7a10c9edda380cefa5e7ba22d21",
	                    "EndpointID": "90b836fe9f235eb417d06d2677831883e0644a25bed3bcd671f8e46a12d2f8a6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-356000",
	                        "c7e40ba5a933"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
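The Ports map in this dump explains the polling URL earlier: container port 8443/tcp (the apiserver) is published on 127.0.0.1:54079. Individual fields can be pulled out of an inspect with a Go template, the same -f style the cli_runner lines further down use; a small sketch shelling out from Go, with the container name taken from this dump:

// hostport.go: sketch that reads the host port mapped to 8443/tcp.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		"old-k8s-version-356000").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // 54079, per the dump above
}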
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 2 (464.202051ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
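The "(may be ok)" reflects that a non-zero exit from minikube status reports component state rather than a command failure, so the harness records it and continues into the post-mortem. A hedged sketch of capturing that exit code from Go:

// statusexit.go: sketch, run minikube status and inspect the exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-356000")
	out, err := cmd.Output()
	fmt.Printf("%s", out) // "Running" in the capture above
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero here encodes state, not necessarily an error.
		fmt.Printf("exit status %d (may be ok)\n", ee.ExitCode())
	}
}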
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-356000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-356000 logs -n 25: (1.541552495s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-356000        | old-k8s-version-356000       | jenkins | v1.32.0 | 16 Feb 24 09:50 PST |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-356000                              | old-k8s-version-356000       | jenkins | v1.32.0 | 16 Feb 24 09:51 PST | 16 Feb 24 09:51 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-356000             | old-k8s-version-356000       | jenkins | v1.32.0 | 16 Feb 24 09:51 PST | 16 Feb 24 09:51 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-356000                              | old-k8s-version-356000       | jenkins | v1.32.0 | 16 Feb 24 09:51 PST |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| image   | no-preload-575000 image list                           | no-preload-575000            | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:55 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-575000                                   | no-preload-575000            | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:55 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-575000                                   | no-preload-575000            | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:55 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-575000                                   | no-preload-575000            | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:55 PST |
	| delete  | -p no-preload-575000                                   | no-preload-575000            | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:55 PST |
	| start   | -p embed-certs-944000                                  | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 09:55 PST | 16 Feb 24 09:56 PST |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-944000            | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 09:56 PST | 16 Feb 24 09:56 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-944000                                  | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 09:56 PST | 16 Feb 24 09:57 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-944000                 | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 09:57 PST | 16 Feb 24 09:57 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-944000                                  | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 09:57 PST | 16 Feb 24 10:02 PST |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | embed-certs-944000 image list                          | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:02 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-944000                                  | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:02 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-944000                                  | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:02 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-944000                                  | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:02 PST |
	| delete  | -p embed-certs-944000                                  | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:02 PST |
	| delete  | -p                                                     | disable-driver-mounts-835000 | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:02 PST |
	|         | disable-driver-mounts-835000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:03 PST |
	|         | default-k8s-diff-port-768000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-768000  | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:03 PST | 16 Feb 24 10:03 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:03 PST | 16 Feb 24 10:03 PST |
	|         | default-k8s-diff-port-768000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-768000       | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:03 PST | 16 Feb 24 10:03 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:03 PST | 16 Feb 24 10:09 PST |
	|         | default-k8s-diff-port-768000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 10:03:43
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
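The header documents the klog-style line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg) used for every entry below, which makes the dump easy to slice mechanically. A hedged sketch of a parser for lines in that shape:

// klogline.go: sketch, split a klog-formatted line into its fields.
package main

import (
	"fmt"
	"regexp"
)

// Matches e.g. "I0216 10:03:43.213266   20300 out.go:291] Setting OutFile ..."
var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I0216 10:03:43.213266   20300 out.go:291] Setting OutFile to fd 1 ..."
	if m := klogRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s mmdd=%s time=%s tid=%s file=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}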
	I0216 10:03:43.213266   20300 out.go:291] Setting OutFile to fd 1 ...
	I0216 10:03:43.213529   20300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 10:03:43.213536   20300 out.go:304] Setting ErrFile to fd 2...
	I0216 10:03:43.213540   20300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 10:03:43.213718   20300 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 10:03:43.215111   20300 out.go:298] Setting JSON to false
	I0216 10:03:43.238266   20300 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5594,"bootTime":1708101029,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 10:03:43.238365   20300 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 10:03:43.260661   20300 out.go:177] * [default-k8s-diff-port-768000] minikube v1.32.0 on Darwin 14.3.1
	I0216 10:03:43.303352   20300 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 10:03:43.303421   20300 notify.go:220] Checking for updates...
	I0216 10:03:43.347176   20300 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 10:03:43.369350   20300 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 10:03:43.390397   20300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 10:03:43.411165   20300 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	I0216 10:03:43.432553   20300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 10:03:43.454920   20300 config.go:182] Loaded profile config "default-k8s-diff-port-768000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 10:03:43.455525   20300 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 10:03:43.510945   20300 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 10:03:43.511116   20300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 10:03:43.619162   20300 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-16 18:03:43.608186494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 10:03:43.661818   20300 out.go:177] * Using the docker driver based on existing profile
	I0216 10:03:43.682890   20300 start.go:299] selected driver: docker
	I0216 10:03:43.682915   20300 start.go:903] validating driver "docker" against &{Name:default-k8s-diff-port-768000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-768000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 10:03:43.683063   20300 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 10:03:43.686531   20300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 10:03:43.793856   20300 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-16 18:03:43.782632537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 10:03:43.794097   20300 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 10:03:43.794169   20300 cni.go:84] Creating CNI manager for ""
	I0216 10:03:43.794191   20300 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 10:03:43.794210   20300 start_flags.go:323] config:
	{Name:default-k8s-diff-port-768000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-768000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 10:03:43.815417   20300 out.go:177] * Starting control plane node default-k8s-diff-port-768000 in cluster default-k8s-diff-port-768000
	I0216 10:03:43.875655   20300 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 10:03:43.899679   20300 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 10:03:43.942592   20300 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 10:03:43.942679   20300 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0216 10:03:43.942702   20300 cache.go:56] Caching tarball of preloaded images
	I0216 10:03:43.942690   20300 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 10:03:43.942914   20300 preload.go:174] Found /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 10:03:43.942936   20300 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0216 10:03:43.943781   20300 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/config.json ...
	I0216 10:03:43.995506   20300 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 10:03:43.995523   20300 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 10:03:43.995541   20300 cache.go:194] Successfully downloaded all kic artifacts
	I0216 10:03:43.995585   20300 start.go:365] acquiring machines lock for default-k8s-diff-port-768000: {Name:mk42b822e5fececc265d3e8ba831f778b8378128 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 10:03:43.995679   20300 start.go:369] acquired machines lock for "default-k8s-diff-port-768000" in 73.856µs
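The machines lock lines show a named, cross-process lock spec (Delay:500ms between attempts, Timeout:10m0s overall) serializing concurrent starts against the same machine store; here it is uncontended and acquired in microseconds. A generic sketch of that acquire-with-retry shape (not minikube's actual lock implementation):

// lockretry.go: sketch of retrying a lock with a fixed delay and overall timeout.
package lockretry

import (
	"errors"
	"time"
)

// Acquire keeps calling try until it succeeds or timeout elapses.
func Acquire(try func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !try() {
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
	return nil
}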
	I0216 10:03:43.995702   20300 start.go:96] Skipping create...Using existing machine configuration
	I0216 10:03:43.995712   20300 fix.go:54] fixHost starting: 
	I0216 10:03:43.995934   20300 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-768000 --format={{.State.Status}}
	I0216 10:03:44.045860   20300 fix.go:102] recreateIfNeeded on default-k8s-diff-port-768000: state=Stopped err=<nil>
	W0216 10:03:44.045892   20300 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 10:03:44.067956   20300 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-768000" ...
	I0216 10:03:44.111751   20300 cli_runner.go:164] Run: docker start default-k8s-diff-port-768000
	I0216 10:03:44.370963   20300 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-768000 --format={{.State.Status}}
	I0216 10:03:44.427413   20300 kic.go:430] container "default-k8s-diff-port-768000" state is running.
	I0216 10:03:44.428015   20300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-768000
	I0216 10:03:44.488580   20300 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/config.json ...
	I0216 10:03:44.489202   20300 machine.go:88] provisioning docker machine ...
	I0216 10:03:44.489230   20300 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-768000"
	I0216 10:03:44.489323   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:44.563817   20300 main.go:141] libmachine: Using SSH client type: native
	I0216 10:03:44.564473   20300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54576 <nil> <nil>}
	I0216 10:03:44.564495   20300 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-768000 && echo "default-k8s-diff-port-768000" | sudo tee /etc/hostname
	I0216 10:03:44.565920   20300 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0216 10:03:47.724739   20300 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-768000
	
	I0216 10:03:47.724842   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:47.784966   20300 main.go:141] libmachine: Using SSH client type: native
	I0216 10:03:47.785355   20300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54576 <nil> <nil>}
	I0216 10:03:47.785373   20300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-768000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-768000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-768000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 10:03:47.924240   20300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
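
Every provisioning step above is a single command run over SSH against the container's published port (54576 in this run), authenticated with the machine's id_rsa key as the docker user. A minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the key path and hostname command from the log; minikube's real client sits behind libmachine and sshutil.go:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port are taken from the log above; both are run-specific.
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/default-k8s-diff-port-768000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the KIC container is local and ephemeral
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:54576", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname default-k8s-diff-port-768000 && echo "default-k8s-diff-port-768000" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
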
	I0216 10:03:47.924261   20300 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17936-1021/.minikube CaCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17936-1021/.minikube}
	I0216 10:03:47.924284   20300 ubuntu.go:177] setting up certificates
	I0216 10:03:47.924293   20300 provision.go:83] configureAuth start
	I0216 10:03:47.924362   20300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-768000
	I0216 10:03:47.975824   20300 provision.go:138] copyHostCerts
	I0216 10:03:47.975930   20300 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem, removing ...
	I0216 10:03:47.975938   20300 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
	I0216 10:03:47.976058   20300 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem (1082 bytes)
	I0216 10:03:47.976289   20300 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem, removing ...
	I0216 10:03:47.976295   20300 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
	I0216 10:03:47.976376   20300 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem (1123 bytes)
	I0216 10:03:47.976544   20300 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem, removing ...
	I0216 10:03:47.976556   20300 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
	I0216 10:03:47.976627   20300 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem (1675 bytes)
	I0216 10:03:47.976790   20300 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-768000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-768000]
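
provision.go then mints a server certificate signed by the shared minikube CA whose SANs cover exactly the names and IPs in the san=[...] list above. A sketch of the underlying crypto/x509 call, assuming the CA certificate and key are already loaded; the helper name is illustrative, not minikube's:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate against the shared minikube CA,
// covering the SANs listed in the log line above. Sketch only: error handling
// is minimal and the serial-number choice is simplistic.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-768000"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from this profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-768000"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}
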
	I0216 10:03:48.199834   20300 provision.go:172] copyRemoteCerts
	I0216 10:03:48.199900   20300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 10:03:48.199994   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:48.253951   20300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54576 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/default-k8s-diff-port-768000/id_rsa Username:docker}
	I0216 10:03:48.354465   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0216 10:03:48.394650   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 10:03:48.434582   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 10:03:48.476340   20300 provision.go:86] duration metric: configureAuth took 552.02147ms
	I0216 10:03:48.476398   20300 ubuntu.go:193] setting minikube options for container-runtime
	I0216 10:03:48.476579   20300 config.go:182] Loaded profile config "default-k8s-diff-port-768000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 10:03:48.476682   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:48.534254   20300 main.go:141] libmachine: Using SSH client type: native
	I0216 10:03:48.534562   20300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54576 <nil> <nil>}
	I0216 10:03:48.534571   20300 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 10:03:48.671760   20300 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 10:03:48.671776   20300 ubuntu.go:71] root file system type: overlay
	I0216 10:03:48.671865   20300 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 10:03:48.671944   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:48.724515   20300 main.go:141] libmachine: Using SSH client type: native
	I0216 10:03:48.724804   20300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54576 <nil> <nil>}
	I0216 10:03:48.724852   20300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 10:03:48.883680   20300 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 10:03:48.883811   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:48.936118   20300 main.go:141] libmachine: Using SSH client type: native
	I0216 10:03:48.936412   20300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54576 <nil> <nil>}
	I0216 10:03:48.936425   20300 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 10:03:49.081728   20300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 10:03:49.081749   20300 machine.go:91] provisioned docker machine in 4.592447297s
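
The unit rendered above is written to docker.service.new and only swapped into place when diff reports a change, so a host whose unit is already current skips the docker restart entirely. The install one-liner from the log reduces to this pattern (sketch; the path is parameterized here for illustration):

package sysd

import "fmt"

// unitSwapCmd reproduces the idempotent install step from the log: replace
// the unit and bounce docker only when the freshly rendered file differs.
func unitSwapCmd(path string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", path)
}
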
	I0216 10:03:49.081759   20300 start.go:300] post-start starting for "default-k8s-diff-port-768000" (driver="docker")
	I0216 10:03:49.081768   20300 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 10:03:49.081831   20300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 10:03:49.081884   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:49.133951   20300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54576 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/default-k8s-diff-port-768000/id_rsa Username:docker}
	I0216 10:03:49.237606   20300 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 10:03:49.242273   20300 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 10:03:49.242298   20300 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 10:03:49.242305   20300 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 10:03:49.242311   20300 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 10:03:49.242320   20300 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/addons for local assets ...
	I0216 10:03:49.242427   20300 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/files for local assets ...
	I0216 10:03:49.242697   20300 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> 21512.pem in /etc/ssl/certs
	I0216 10:03:49.242986   20300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 10:03:49.260947   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /etc/ssl/certs/21512.pem (1708 bytes)
	I0216 10:03:49.305051   20300 start.go:303] post-start completed in 223.278193ms
	I0216 10:03:49.305137   20300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 10:03:49.305201   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:49.359223   20300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54576 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/default-k8s-diff-port-768000/id_rsa Username:docker}
	I0216 10:03:49.452317   20300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 10:03:49.457124   20300 fix.go:56] fixHost completed within 5.461306073s
	I0216 10:03:49.457136   20300 start.go:83] releasing machines lock for "default-k8s-diff-port-768000", held for 5.46134273s
	I0216 10:03:49.457219   20300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-768000
	I0216 10:03:49.509242   20300 ssh_runner.go:195] Run: cat /version.json
	I0216 10:03:49.509242   20300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 10:03:49.509327   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:49.509357   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:49.565546   20300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54576 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/default-k8s-diff-port-768000/id_rsa Username:docker}
	I0216 10:03:49.565825   20300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54576 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/default-k8s-diff-port-768000/id_rsa Username:docker}
	I0216 10:03:49.769519   20300 ssh_runner.go:195] Run: systemctl --version
	I0216 10:03:49.774630   20300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 10:03:49.779536   20300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 10:03:49.809552   20300 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 10:03:49.809623   20300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0216 10:03:49.824854   20300 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0216 10:03:49.824870   20300 start.go:475] detecting cgroup driver to use...
	I0216 10:03:49.824893   20300 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 10:03:49.825026   20300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 10:03:49.852535   20300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0216 10:03:49.868652   20300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 10:03:49.885232   20300 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 10:03:49.885319   20300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 10:03:49.901110   20300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 10:03:49.919281   20300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 10:03:49.935510   20300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 10:03:49.952171   20300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 10:03:49.967540   20300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 10:03:49.984080   20300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 10:03:50.000584   20300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 10:03:50.017240   20300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 10:03:50.088795   20300 ssh_runner.go:195] Run: sudo systemctl restart containerd
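
Because the host reports the cgroupfs driver, containerd's config.toml is rewritten so that SystemdCgroup = false and the runc v2 runtime is selected; kubelet's cgroupDriver (set to cgroupfs further below) must agree with the runtime or pods fail to start. The sed edits above amount to a substitution like this (regexp sketch, not minikube's code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
	// Preserve the original indentation while flipping the value, as the
	// sed command in the log does.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
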
	I0216 10:03:50.175176   20300 start.go:475] detecting cgroup driver to use...
	I0216 10:03:50.175200   20300 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 10:03:50.175266   20300 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 10:03:50.194474   20300 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 10:03:50.194552   20300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 10:03:50.214076   20300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 10:03:50.245641   20300 ssh_runner.go:195] Run: which cri-dockerd
	I0216 10:03:50.251288   20300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 10:03:50.271422   20300 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 10:03:50.353391   20300 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 10:03:50.453561   20300 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 10:03:50.518622   20300 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 10:03:50.518710   20300 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 10:03:50.571258   20300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 10:03:50.631970   20300 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 10:03:50.937991   20300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0216 10:03:50.955518   20300 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0216 10:03:50.973446   20300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 10:03:50.991214   20300 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0216 10:03:51.058915   20300 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0216 10:03:51.123249   20300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 10:03:51.189134   20300 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0216 10:03:51.227326   20300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 10:03:51.244154   20300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 10:03:51.309270   20300 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0216 10:03:51.398024   20300 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0216 10:03:51.398139   20300 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0216 10:03:51.402814   20300 start.go:543] Will wait 60s for crictl version
	I0216 10:03:51.402877   20300 ssh_runner.go:195] Run: which crictl
	I0216 10:03:51.407276   20300 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0216 10:03:51.460428   20300 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
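
start.go gives the freshly restarted cri-dockerd up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl. A deadline-bounded stat loop along these lines would implement that wait (sketch; minikube's actual retry helper differs):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists as a unix socket or the timeout lapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
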
	I0216 10:03:51.460514   20300 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 10:03:51.482886   20300 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 10:03:51.553499   20300 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0216 10:03:51.553617   20300 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-768000 dig +short host.docker.internal
	I0216 10:03:51.670871   20300 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0216 10:03:51.670975   20300 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0216 10:03:51.675922   20300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 10:03:51.693367   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:51.746585   20300 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 10:03:51.746672   20300 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 10:03:51.764107   20300 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0216 10:03:51.764137   20300 docker.go:615] Images already preloaded, skipping extraction
	I0216 10:03:51.764223   20300 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 10:03:51.781642   20300 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0216 10:03:51.781661   20300 cache_images.go:84] Images are preloaded, skipping loading
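
docker images is listed twice, before and after the would-be extraction, and checked against the image set required for v1.28.4; since every required tag is already present, the preload tarball is never unpacked. The check reduces to set containment (sketch):

package preload

// imagesPreloaded reports whether every required image tag already shows up
// in the `docker images --format {{.Repository}}:{{.Tag}}` output. Ordering
// differences between the two listings above are irrelevant to the result.
func imagesPreloaded(listed, required []string) bool {
	have := make(map[string]bool, len(listed))
	for _, img := range listed {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			return false
		}
	}
	return true
}
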
	I0216 10:03:51.781734   20300 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 10:03:51.828910   20300 cni.go:84] Creating CNI manager for ""
	I0216 10:03:51.828931   20300 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 10:03:51.828952   20300 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 10:03:51.828969   20300 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-768000 NodeName:default-k8s-diff-port-768000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0216 10:03:51.829121   20300 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-768000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0216 10:03:51.829193   20300 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-768000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-768000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0216 10:03:51.829251   20300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0216 10:03:51.844814   20300 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 10:03:51.844892   20300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 10:03:51.861611   20300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0216 10:03:51.891198   20300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 10:03:51.919853   20300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
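
The kubeadm.yaml rendered above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. Anything that wants to inspect it must decode the stream document by document; a sketch assuming gopkg.in/yaml.v3 (minikube renders this file from a template rather than decoding it):

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		// Each Decode call consumes one `---`-delimited document.
		var doc map[string]interface{}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
	}
}
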
	I0216 10:03:51.950049   20300 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0216 10:03:51.955149   20300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 10:03:51.972307   20300 certs.go:56] Setting up /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000 for IP: 192.168.67.2
	I0216 10:03:51.972350   20300 certs.go:190] acquiring lock for shared ca certs: {Name:mk8795f926ccc5dd497b243df5a2c158b5c5b28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 10:03:51.972528   20300 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key
	I0216 10:03:51.972597   20300 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key
	I0216 10:03:51.972717   20300 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.key
	I0216 10:03:51.972831   20300 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/apiserver.key.c7fa3a9e
	I0216 10:03:51.972961   20300 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/proxy-client.key
	I0216 10:03:51.973209   20300 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem (1338 bytes)
	W0216 10:03:51.973253   20300 certs.go:433] ignoring /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151_empty.pem, impossibly tiny 0 bytes
	I0216 10:03:51.973262   20300 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 10:03:51.973294   20300 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem (1082 bytes)
	I0216 10:03:51.973325   20300 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem (1123 bytes)
	I0216 10:03:51.973358   20300 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem (1675 bytes)
	I0216 10:03:51.973426   20300 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem (1708 bytes)
	I0216 10:03:51.974023   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 10:03:52.015497   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 10:03:52.056580   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 10:03:52.097757   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0216 10:03:52.138660   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 10:03:52.179888   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 10:03:52.221595   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 10:03:52.268672   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 10:03:52.314393   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /usr/share/ca-certificates/21512.pem (1708 bytes)
	I0216 10:03:52.354973   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 10:03:52.396451   20300 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem --> /usr/share/ca-certificates/2151.pem (1338 bytes)
	I0216 10:03:52.438047   20300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 10:03:52.467860   20300 ssh_runner.go:195] Run: openssl version
	I0216 10:03:52.473999   20300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21512.pem && ln -fs /usr/share/ca-certificates/21512.pem /etc/ssl/certs/21512.pem"
	I0216 10:03:52.489826   20300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21512.pem
	I0216 10:03:52.495948   20300 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:51 /usr/share/ca-certificates/21512.pem
	I0216 10:03:52.496043   20300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21512.pem
	I0216 10:03:52.504317   20300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21512.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 10:03:52.521552   20300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 10:03:52.537685   20300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 10:03:52.542115   20300 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 10:03:52.542156   20300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 10:03:52.549184   20300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 10:03:52.564463   20300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2151.pem && ln -fs /usr/share/ca-certificates/2151.pem /etc/ssl/certs/2151.pem"
	I0216 10:03:52.580325   20300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2151.pem
	I0216 10:03:52.584431   20300 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:51 /usr/share/ca-certificates/2151.pem
	I0216 10:03:52.584478   20300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2151.pem
	I0216 10:03:52.591130   20300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2151.pem /etc/ssl/certs/51391683.0"
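
OpenSSL discovers CAs in /etc/ssl/certs through <subject-hash>.0 symlinks; openssl x509 -hash -noout prints the hash used as the link name (3ec20f2e, b5213941, and 51391683 above). The same link can be created by shelling out exactly as the log does (sketch; writing under /etc/ssl/certs requires root):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: drop any stale link, then point it at the cert.
	os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		log.Fatal(err)
	}
}
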
	I0216 10:03:52.605846   20300 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 10:03:52.612521   20300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 10:03:52.619518   20300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 10:03:52.626019   20300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 10:03:52.632303   20300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 10:03:52.639012   20300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 10:03:52.645756   20300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
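
openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the six control-plane certs above are screened before being reused. The same test in pure Go (sketch; the cert path is one of those checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend`: true means the certificate
// will already have expired d from now.
func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	expiring, err := expiresWithin(pemBytes, 86400*time.Second)
	fmt.Println(expiring, err)
}
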
	I0216 10:03:52.652210   20300 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-768000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-768000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 10:03:52.652323   20300 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 10:03:52.670115   20300 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 10:03:52.685745   20300 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 10:03:52.685797   20300 kubeadm.go:636] restartCluster start
	I0216 10:03:52.685872   20300 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 10:03:52.700622   20300 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:52.700740   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:03:52.755168   20300 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-768000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 10:03:52.755348   20300 kubeconfig.go:146] "default-k8s-diff-port-768000" context is missing from /Users/jenkins/minikube-integration/17936-1021/kubeconfig - will repair!
	I0216 10:03:52.755667   20300 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/kubeconfig: {Name:mkc64745a91dd32fe2631c66fb95eca6401b716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 10:03:52.757177   20300 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 10:03:52.772401   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:52.772484   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:52.788424   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:53.273076   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:53.273157   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:53.290351   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:53.774353   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:53.774438   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:53.792881   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:54.274539   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:54.274668   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:54.292411   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:54.773020   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:54.773211   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:54.790777   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:55.272569   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:55.272716   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:55.293217   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:55.772586   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:55.772694   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:55.789949   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:56.272593   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:56.272674   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:56.294100   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:56.774237   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:56.774389   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:56.791552   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:57.273947   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:57.274082   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:57.291515   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:57.772567   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:57.772686   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:57.790156   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:58.274632   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:58.274759   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:58.290920   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:58.772681   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:58.772839   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:58.790889   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:59.272745   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:59.272831   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:59.293684   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:03:59.774200   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:03:59.774389   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:03:59.792909   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:04:00.274269   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:04:00.274421   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:04:00.293636   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:04:00.773273   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:04:00.773384   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:04:00.792072   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:04:01.272638   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:04:01.272700   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:04:01.289639   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:04:01.774598   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:04:01.774756   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:04:01.792033   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:04:02.274540   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:04:02.274638   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:04:02.296103   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:04:02.773689   20300 api_server.go:166] Checking apiserver status ...
	I0216 10:04:02.773925   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:04:02.791559   20300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:04:02.791577   20300 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0216 10:04:02.791591   20300 kubeadm.go:1135] stopping kube-system containers ...
	I0216 10:04:02.791659   20300 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 10:04:02.809319   20300 docker.go:483] Stopping containers: [fc65075cca43 50c990762477 29f24ad6cdca 168acce50dd5 860c2b379b26 a90778262e8c 00bb4349398f b5ecec147b18 5ed48ae7180d c59b5618f93b 5c9a051580a2 434c15e93b9a 1adfc26a72c9 8bfabe4588e2 40809ba33033]
	I0216 10:04:02.809406   20300 ssh_runner.go:195] Run: docker stop fc65075cca43 50c990762477 29f24ad6cdca 168acce50dd5 860c2b379b26 a90778262e8c 00bb4349398f b5ecec147b18 5ed48ae7180d c59b5618f93b 5c9a051580a2 434c15e93b9a 1adfc26a72c9 8bfabe4588e2 40809ba33033
	I0216 10:04:02.826935   20300 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0216 10:04:02.844655   20300 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 10:04:02.859260   20300 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 16 18:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 16 18:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Feb 16 18:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 16 18:02 /etc/kubernetes/scheduler.conf
	
	I0216 10:04:02.859317   20300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0216 10:04:02.874861   20300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0216 10:04:02.889836   20300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0216 10:04:02.904543   20300 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:04:02.904609   20300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0216 10:04:02.919141   20300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0216 10:04:02.933696   20300 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:04:02.933764   20300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0216 10:04:02.948452   20300 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 10:04:02.963394   20300 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 10:04:02.963408   20300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 10:04:03.019877   20300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 10:04:03.388869   20300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 10:04:03.519036   20300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 10:04:03.578895   20300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
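
Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the pinned v1.28.4 binaries. A sketch of that sequencing; the fixed PATH suffix here is an assumption, as the real command splices in the remote $PATH:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// runInitPhase mirrors the invocations above: prepend the pinned binaries
// directory to PATH via `sudo env`, then replay one init phase.
func runInitPhase(phase ...string) error {
	args := append([]string{"env", "PATH=/var/lib/minikube/binaries/v1.28.4:/usr/bin:/bin",
		"kubeadm", "init", "phase"}, phase...)
	args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		if err := runInitPhase(p...); err != nil {
			log.Fatal(err)
		}
	}
}
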
	I0216 10:04:03.673985   20300 api_server.go:52] waiting for apiserver process to appear ...
	I0216 10:04:03.674102   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 10:04:04.174284   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 10:04:04.674538   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 10:04:04.764200   20300 api_server.go:72] duration metric: took 1.090193763s to wait for apiserver process to appear ...
	I0216 10:04:04.764221   20300 api_server.go:88] waiting for apiserver healthz status ...
	I0216 10:04:04.764253   20300 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54580/healthz ...
	I0216 10:04:04.765896   20300 api_server.go:269] stopped: https://127.0.0.1:54580/healthz: Get "https://127.0.0.1:54580/healthz": EOF
	I0216 10:04:05.264523   20300 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54580/healthz ...
	I0216 10:04:08.059677   20300 api_server.go:279] https://127.0.0.1:54580/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 10:04:08.059704   20300 api_server.go:103] status: https://127.0.0.1:54580/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 10:04:08.059718   20300 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54580/healthz ...
	I0216 10:04:08.152281   20300 api_server.go:279] https://127.0.0.1:54580/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 10:04:08.152302   20300 api_server.go:103] status: https://127.0.0.1:54580/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 10:04:08.264449   20300 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54580/healthz ...
	I0216 10:04:08.348456   20300 api_server.go:279] https://127.0.0.1:54580/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 10:04:08.348480   20300 api_server.go:103] status: https://127.0.0.1:54580/healthz returned error 500: (response body identical to the 32 lines logged immediately above)
	I0216 10:04:08.765277   20300 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54580/healthz ...
	I0216 10:04:08.771650   20300 api_server.go:279] https://127.0.0.1:54580/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 10:04:08.771670   20300 api_server.go:103] status: https://127.0.0.1:54580/healthz returned error 500: (response body identical to the 32 lines logged immediately above)
	I0216 10:04:09.265173   20300 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54580/healthz ...
	I0216 10:04:09.272430   20300 api_server.go:279] https://127.0.0.1:54580/healthz returned 200:
	ok
	I0216 10:04:09.351120   20300 api_server.go:141] control plane version: v1.28.4
	I0216 10:04:09.351140   20300 api_server.go:131] duration metric: took 4.58682159s to wait for apiserver health ...
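
The healthz exchange above follows a fixed progression: EOF while the apiserver socket is still coming up, 403 because the unauthenticated probe runs as system:anonymous before the RBAC bootstrap roles exist, 500 while poststarthooks such as rbac/bootstrap-roles are still failing, and finally 200 with body "ok"; minikube treats anything other than 200 as not-ready and retries. A sketch of such a poller in Go, assuming the same host-mapped endpoint from the log and skipping TLS verification the way an unauthenticated local probe must (do not do this against a remote endpoint):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        url := "https://127.0.0.1:54580/healthz" // host-mapped apiserver port from the log
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver's serving cert is not in the host trust store,
            // so a local health probe skips verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("not up yet:", err) // e.g. connection refused or EOF
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // body is "ok" once every poststarthook reports [+]
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }
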
	I0216 10:04:09.351149   20300 cni.go:84] Creating CNI manager for ""
	I0216 10:04:09.351160   20300 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 10:04:09.376628   20300 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 10:04:09.399235   20300 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 10:04:09.467306   20300 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
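
The 457-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log, so the snippet below is only a representative bridge conflist of the kind the "Configuring bridge CNI" step writes; every field value here is illustrative, not the exact payload. Embedding it in Go and round-tripping it through encoding/json at least checks it is well-formed:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Representative bridge CNI config chain: a bridge plugin with host-local
    // IPAM, followed by portmap for hostPort support. Values are assumptions.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        var v map[string]any
        if err := json.Unmarshal([]byte(conflist), &v); err != nil {
            panic(err)
        }
        fmt.Println("valid conflist:", v["name"])
    }
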
	I0216 10:04:09.654363   20300 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 10:04:09.667215   20300 system_pods.go:59] 8 kube-system pods found
	I0216 10:04:09.667240   20300 system_pods.go:61] "coredns-5dd5756b68-7pp6m" [bd4e5a6b-c129-461c-a57c-16caf2e41c7a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0216 10:04:09.667247   20300 system_pods.go:61] "etcd-default-k8s-diff-port-768000" [62fcfaa4-d450-471c-8637-3994dea31a92] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 10:04:09.667254   20300 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768000" [315a2be4-6893-474b-92fc-6cdbf0b571ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 10:04:09.667270   20300 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768000" [5fe6dfec-66a1-492f-a7b9-f7092e1843b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 10:04:09.667281   20300 system_pods.go:61] "kube-proxy-tjc47" [bf97d94f-5e7b-4f12-91ca-10e06d588870] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0216 10:04:09.667289   20300 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768000" [55ba9f80-5088-45ca-9491-cbd9f6f176fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 10:04:09.667297   20300 system_pods.go:61] "metrics-server-57f55c9bc5-gw2mw" [695266bb-6047-4ce1-89d0-c658e45f12da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 10:04:09.667304   20300 system_pods.go:61] "storage-provisioner" [83c5405d-0bb6-4827-a75f-c20e2a87d666] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0216 10:04:09.667311   20300 system_pods.go:74] duration metric: took 12.928478ms to wait for pod list to return data ...
	I0216 10:04:09.667322   20300 node_conditions.go:102] verifying NodePressure condition ...
	I0216 10:04:09.745882   20300 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0216 10:04:09.745900   20300 node_conditions.go:123] node cpu capacity is 12
	I0216 10:04:09.745913   20300 node_conditions.go:105] duration metric: took 78.585263ms to run NodePressure ...
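
The NodePressure step above is just a read of node capacity (61202244Ki ephemeral storage and 12 CPUs on this runner). A client-go sketch that lists the same capacity fields, assuming a reachable kubeconfig at the default ~/.kube/config location:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a ResourceList; Cpu() and StorageEphemeral() return quantities.
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
    }
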
	I0216 10:04:09.745928   20300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 10:04:10.483663   20300 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0216 10:04:10.545507   20300 kubeadm.go:787] kubelet initialised
	I0216 10:04:10.545520   20300 kubeadm.go:788] duration metric: took 61.840416ms waiting for restarted kubelet to initialise ...
	I0216 10:04:10.545527   20300 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 10:04:10.558838   20300 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:12.565936   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:15.065592   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:17.066362   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:19.566713   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:21.567473   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:24.065981   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:26.066168   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:28.068186   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:30.567494   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:33.066166   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:35.067207   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:37.566421   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:40.066319   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:42.067529   20300 pod_ready.go:102] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:43.066319   20300 pod_ready.go:92] pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace has status "Ready":"True"
	I0216 10:04:43.066333   20300 pod_ready.go:81] duration metric: took 32.506837105s waiting for pod "coredns-5dd5756b68-7pp6m" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:43.066339   20300 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:43.070889   20300 pod_ready.go:92] pod "etcd-default-k8s-diff-port-768000" in "kube-system" namespace has status "Ready":"True"
	I0216 10:04:43.070898   20300 pod_ready.go:81] duration metric: took 4.553488ms waiting for pod "etcd-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:43.070905   20300 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:43.075525   20300 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-768000" in "kube-system" namespace has status "Ready":"True"
	I0216 10:04:43.075535   20300 pod_ready.go:81] duration metric: took 4.625757ms waiting for pod "kube-apiserver-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:43.075545   20300 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:43.080107   20300 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-768000" in "kube-system" namespace has status "Ready":"True"
	I0216 10:04:43.080117   20300 pod_ready.go:81] duration metric: took 4.566203ms waiting for pod "kube-controller-manager-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:43.080123   20300 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tjc47" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:43.086054   20300 pod_ready.go:92] pod "kube-proxy-tjc47" in "kube-system" namespace has status "Ready":"True"
	I0216 10:04:43.086071   20300 pod_ready.go:81] duration metric: took 5.942012ms waiting for pod "kube-proxy-tjc47" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:43.086077   20300 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:43.463332   20300 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-768000" in "kube-system" namespace has status "Ready":"True"
	I0216 10:04:43.463344   20300 pod_ready.go:81] duration metric: took 377.254831ms waiting for pod "kube-scheduler-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
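
Each pod_ready wait above reduces to polling the pod's Ready condition until it reports True or a 4m0s budget expires. A minimal client-go version of that loop, assuming the default kubeconfig and reusing the coredns pod name from the log (a sketch, not minikube's implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        name := "coredns-5dd5756b68-7pp6m" // pod name from the log above
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            p, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
            if err == nil && podReady(p) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for Ready")
    }
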
	I0216 10:04:43.463351   20300 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gw2mw" in "kube-system" namespace to be "Ready" ...
	I0216 10:04:45.470307   20300 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gw2mw" in "kube-system" namespace has status "Ready":"False"
	I0216 10:04:47.470414   20300 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gw2mw" in "kube-system" namespace has status "Ready":"False"
	... (100 similar probes elided: pod_ready.go:102 reported pod "metrics-server-57f55c9bc5-gw2mw" with status "Ready":"False" every ~2.5s from 10:04:49.969486 through 10:08:37.476721) ...
	I0216 10:08:39.974113   20300 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gw2mw" in "kube-system" namespace has status "Ready":"False"
	I0216 10:08:41.975218   20300 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gw2mw" in "kube-system" namespace has status "Ready":"False"
	I0216 10:08:43.468260   20300 pod_ready.go:81] duration metric: took 4m0.000209566s waiting for pod "metrics-server-57f55c9bc5-gw2mw" in "kube-system" namespace to be "Ready" ...
	E0216 10:08:43.468279   20300 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-gw2mw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0216 10:08:43.468297   20300 pod_ready.go:38] duration metric: took 4m32.917431323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 10:08:43.468326   20300 kubeadm.go:640] restartCluster took 4m50.776838547s
	W0216 10:08:43.468373   20300 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
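
The failure path here is mechanical: the metrics-server pod consumed its full 4m0s Ready budget, WaitExtra gave up without retrying, and restartCluster's caller falls back to `kubeadm reset` plus a fresh `kubeadm init` (seen next). The timeout plumbing amounts to a context-bounded poll; a generic sketch with the budgets shrunk so the example terminates quickly (minikube's actual budget in this log is 4m0s with ~2.5s probes):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check until it succeeds or ctx expires.
    func waitFor(ctx context.Context, interval time.Duration, check func() bool) error {
        t := time.NewTicker(interval)
        defer t.Stop()
        for {
            if check() {
                return nil
            }
            select {
            case <-ctx.Done():
                return errors.New("timed out waiting for condition")
            case <-t.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        err := waitFor(ctx, 200*time.Millisecond, func() bool {
            return false // stand-in for "metrics-server pod is Ready", which never happens here
        })
        fmt.Println(err) // on timeout the caller falls back to reset + init, as the log does next
    }
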
	I0216 10:08:43.468396   20300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0216 10:08:50.329867   20300 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (6.861318847s)
	I0216 10:08:50.329944   20300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 10:08:50.348220   20300 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 10:08:50.364063   20300 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 10:08:50.364125   20300 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 10:08:50.379574   20300 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
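
The "config check failed, skipping stale config cleanup" decision above hinges on whether all four kubeconfig files still exist after the reset: `ls` exiting with status 2 means they are already gone, so there is nothing stale to clean before re-running `kubeadm init`. A local sketch of the same test (run somewhere /etc/kubernetes is visible, e.g. inside the node container):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The same four files the log checks with `sudo ls -la`.
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        stale := true
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                fmt.Println("missing:", f)
                stale = false
            }
        }
        if !stale {
            fmt.Println("no stale config to clean; proceed straight to kubeadm init")
        }
    }
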
	I0216 10:08:50.379601   20300 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 10:08:50.427833   20300 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0216 10:08:50.427881   20300 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 10:08:50.553796   20300 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 10:08:50.553897   20300 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 10:08:50.554054   20300 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0216 10:08:50.859126   20300 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 10:08:50.886751   20300 out.go:204]   - Generating certificates and keys ...
	I0216 10:08:50.886835   20300 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 10:08:50.886896   20300 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 10:08:50.886969   20300 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 10:08:50.887032   20300 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 10:08:50.887096   20300 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 10:08:50.887150   20300 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 10:08:50.887213   20300 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 10:08:50.887281   20300 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 10:08:50.887357   20300 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 10:08:50.887428   20300 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 10:08:50.887472   20300 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 10:08:50.887527   20300 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 10:08:50.978945   20300 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 10:08:51.099093   20300 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 10:08:51.319386   20300 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 10:08:51.375457   20300 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 10:08:51.375725   20300 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 10:08:51.377608   20300 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 10:08:51.399216   20300 out.go:204]   - Booting up control plane ...
	I0216 10:08:51.399291   20300 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 10:08:51.399392   20300 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 10:08:51.399482   20300 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 10:08:51.399596   20300 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 10:08:51.399716   20300 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 10:08:51.399790   20300 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 10:08:51.469691   20300 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 10:08:56.472617   20300 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.003414 seconds
	I0216 10:08:56.472720   20300 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0216 10:08:56.483275   20300 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0216 10:08:57.000650   20300 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0216 10:08:57.000829   20300 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-768000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0216 10:08:57.509228   20300 kubeadm.go:322] [bootstrap-token] Using token: z1syrk.0ae1aabfzwmwn1tt
	I0216 10:08:57.549058   20300 out.go:204]   - Configuring RBAC rules ...
	I0216 10:08:57.549371   20300 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0216 10:08:57.552536   20300 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0216 10:08:57.591793   20300 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0216 10:08:57.594342   20300 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0216 10:08:57.596938   20300 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0216 10:08:57.600154   20300 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0216 10:08:57.608176   20300 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0216 10:08:57.760738   20300 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0216 10:08:57.957365   20300 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0216 10:08:57.958368   20300 kubeadm.go:322] 
	I0216 10:08:57.958421   20300 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0216 10:08:57.958426   20300 kubeadm.go:322] 
	I0216 10:08:57.958525   20300 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0216 10:08:57.958536   20300 kubeadm.go:322] 
	I0216 10:08:57.958569   20300 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0216 10:08:57.958655   20300 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0216 10:08:57.958715   20300 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0216 10:08:57.958722   20300 kubeadm.go:322] 
	I0216 10:08:57.958781   20300 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0216 10:08:57.958796   20300 kubeadm.go:322] 
	I0216 10:08:57.958852   20300 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0216 10:08:57.958868   20300 kubeadm.go:322] 
	I0216 10:08:57.958966   20300 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0216 10:08:57.959095   20300 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0216 10:08:57.959210   20300 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0216 10:08:57.959220   20300 kubeadm.go:322] 
	I0216 10:08:57.959349   20300 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0216 10:08:57.959464   20300 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0216 10:08:57.959483   20300 kubeadm.go:322] 
	I0216 10:08:57.959576   20300 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token z1syrk.0ae1aabfzwmwn1tt \
	I0216 10:08:57.959706   20300 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f04862da0f135f2f63db76a0e7e00284dbb48f603bb98f1797713392a7cbadc1 \
	I0216 10:08:57.959731   20300 kubeadm.go:322] 	--control-plane 
	I0216 10:08:57.959743   20300 kubeadm.go:322] 
	I0216 10:08:57.959875   20300 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0216 10:08:57.959891   20300 kubeadm.go:322] 
	I0216 10:08:57.959975   20300 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token z1syrk.0ae1aabfzwmwn1tt \
	I0216 10:08:57.960118   20300 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f04862da0f135f2f63db76a0e7e00284dbb48f603bb98f1797713392a7cbadc1 
	I0216 10:08:57.966258   20300 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0216 10:08:57.966419   20300 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
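
The `--discovery-token-ca-cert-hash sha256:...` value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A Go sketch that recomputes it, assuming the conventional kubeadm CA path inside the node:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional kubeadm path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The discovery hash is sha256 over the DER-encoded SubjectPublicKeyInfo
        // of the CA's public key, printed with a "sha256:" prefix.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
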
	I0216 10:08:57.966438   20300 cni.go:84] Creating CNI manager for ""
	I0216 10:08:57.966457   20300 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 10:08:58.006932   20300 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 10:08:58.028138   20300 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 10:08:58.069624   20300 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0216 10:08:58.104787   20300 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0216 10:08:58.104859   20300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fdce3bf7146356e37c4eabb07ae105993e4520f9 minikube.k8s.io/name=default-k8s-diff-port-768000 minikube.k8s.io/updated_at=2024_02_16T10_08_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 10:08:58.104862   20300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 10:08:58.253598   20300 ops.go:34] apiserver oom_adj: -16
	I0216 10:08:58.253642   20300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	... (24 similar probes elided: `sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig` re-run every 0.5s from 10:08:58.754565 through 10:09:10.254511) ...
	I0216 10:09:10.754244   20300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 10:09:10.857400   20300 kubeadm.go:1088] duration metric: took 12.75235335s to wait for elevateKubeSystemPrivileges.
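
elevateKubeSystemPrivileges above repeats `kubectl get sa default` every 0.5s because the controller manager creates default service accounts asynchronously after init; only once they exist can the cluster-admin binding for kube-system:default take effect. A client-go equivalent of that wait, assuming the default kubeconfig (a sketch, not minikube's code):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(time.Minute)
        for time.Now().Before(deadline) {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(
                context.Background(), "default", metav1.GetOptions{})
            if err == nil {
                fmt.Println("default service account exists; RBAC bindings can take effect")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
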
	I0216 10:09:10.857420   20300 kubeadm.go:406] StartCluster complete in 5m18.198999886s
	I0216 10:09:10.857443   20300 settings.go:142] acquiring lock: {Name:mk797212e07e7fce370dcd397d90efd277229019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 10:09:10.857535   20300 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 10:09:10.858067   20300 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/kubeconfig: {Name:mkc64745a91dd32fe2631c66fb95eca6401b716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 10:09:10.858404   20300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0216 10:09:10.858427   20300 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0216 10:09:10.858469   20300 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-768000"
	I0216 10:09:10.858489   20300 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-768000"
	I0216 10:09:10.858489   20300 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-768000"
	I0216 10:09:10.858490   20300 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-768000"
	W0216 10:09:10.858497   20300 addons.go:243] addon storage-provisioner should already be in state true
	I0216 10:09:10.858501   20300 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-768000"
	I0216 10:09:10.858509   20300 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-768000"
	I0216 10:09:10.858512   20300 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-768000"
	W0216 10:09:10.858519   20300 addons.go:243] addon metrics-server should already be in state true
	I0216 10:09:10.858530   20300 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-768000"
	W0216 10:09:10.858541   20300 addons.go:243] addon dashboard should already be in state true
	I0216 10:09:10.858543   20300 host.go:66] Checking if "default-k8s-diff-port-768000" exists ...
	I0216 10:09:10.858559   20300 host.go:66] Checking if "default-k8s-diff-port-768000" exists ...
	I0216 10:09:10.858572   20300 host.go:66] Checking if "default-k8s-diff-port-768000" exists ...
	I0216 10:09:10.858594   20300 config.go:182] Loaded profile config "default-k8s-diff-port-768000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 10:09:10.858830   20300 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-768000 --format={{.State.Status}}
	I0216 10:09:10.858878   20300 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-768000 --format={{.State.Status}}
	I0216 10:09:10.858985   20300 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-768000 --format={{.State.Status}}
	I0216 10:09:10.859940   20300 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-768000 --format={{.State.Status}}
	I0216 10:09:10.941771   20300 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-768000"
	W0216 10:09:10.941802   20300 addons.go:243] addon default-storageclass should already be in state true
	I0216 10:09:10.941823   20300 host.go:66] Checking if "default-k8s-diff-port-768000" exists ...
	I0216 10:09:10.966308   20300 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0216 10:09:10.942397   20300 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-768000 --format={{.State.Status}}
	I0216 10:09:11.079271   20300 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 10:09:11.014457   20300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0216 10:09:11.041596   20300 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0216 10:09:11.116615   20300 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0216 10:09:11.137594   20300 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 10:09:11.139682   20300 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0216 10:09:11.158280   20300 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0216 10:09:11.195610   20300 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0216 10:09:11.195620   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0216 10:09:11.195627   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0216 10:09:11.195625   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0216 10:09:11.195695   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0216 10:09:11.195747   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:09:11.195785   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:09:11.195794   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:09:11.195809   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:09:11.280393   20300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54576 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/default-k8s-diff-port-768000/id_rsa Username:docker}
	I0216 10:09:11.281020   20300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54576 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/default-k8s-diff-port-768000/id_rsa Username:docker}
	I0216 10:09:11.280562   20300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54576 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/default-k8s-diff-port-768000/id_rsa Username:docker}
	I0216 10:09:11.283612   20300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54576 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/default-k8s-diff-port-768000/id_rsa Username:docker}
	I0216 10:09:11.366943   20300 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-768000" context rescaled to 1 replicas
	I0216 10:09:11.366976   20300 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 10:09:11.390910   20300 out.go:177] * Verifying Kubernetes components...
	I0216 10:09:11.431858   20300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 10:09:11.679178   20300 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0216 10:09:11.679195   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0216 10:09:11.753996   20300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 10:09:11.865724   20300 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0216 10:09:11.865763   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0216 10:09:11.868933   20300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0216 10:09:11.869916   20300 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0216 10:09:11.869932   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0216 10:09:12.058500   20300 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0216 10:09:12.058541   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0216 10:09:12.062123   20300 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0216 10:09:12.062175   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0216 10:09:12.175945   20300 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 10:09:12.175966   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0216 10:09:12.248294   20300 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0216 10:09:12.248311   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0216 10:09:12.364611   20300 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0216 10:09:12.364625   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0216 10:09:12.365179   20300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 10:09:12.478494   20300 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0216 10:09:12.478509   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0216 10:09:12.579508   20300 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0216 10:09:12.579523   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0216 10:09:12.759723   20300 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0216 10:09:12.759748   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0216 10:09:12.968470   20300 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0216 10:09:12.968522   20300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0216 10:09:13.157622   20300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0216 10:09:13.184976   20300 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.068301189s)
	I0216 10:09:13.184990   20300 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.753070474s)
	I0216 10:09:13.185004   20300 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0216 10:09:13.185083   20300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-768000
	I0216 10:09:13.251894   20300 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-768000" to be "Ready" ...
	I0216 10:09:13.256658   20300 node_ready.go:49] node "default-k8s-diff-port-768000" has status "Ready":"True"
	I0216 10:09:13.256675   20300 node_ready.go:38] duration metric: took 4.74846ms waiting for node "default-k8s-diff-port-768000" to be "Ready" ...
	I0216 10:09:13.256684   20300 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 10:09:13.266411   20300 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7vts4" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:13.749837   20300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.995769213s)
	I0216 10:09:13.750053   20300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.881015478s)
	I0216 10:09:13.850349   20300 pod_ready.go:97] error getting pod "coredns-5dd5756b68-7vts4" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7vts4" not found
	I0216 10:09:13.850373   20300 pod_ready.go:81] duration metric: took 583.931462ms waiting for pod "coredns-5dd5756b68-7vts4" in "kube-system" namespace to be "Ready" ...
	E0216 10:09:13.850384   20300 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-7vts4" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7vts4" not found
	I0216 10:09:13.850391   20300 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j55mb" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:13.861488   20300 pod_ready.go:92] pod "coredns-5dd5756b68-j55mb" in "kube-system" namespace has status "Ready":"True"
	I0216 10:09:13.861509   20300 pod_ready.go:81] duration metric: took 11.110633ms waiting for pod "coredns-5dd5756b68-j55mb" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:13.861521   20300 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:13.873277   20300 pod_ready.go:92] pod "etcd-default-k8s-diff-port-768000" in "kube-system" namespace has status "Ready":"True"
	I0216 10:09:13.873294   20300 pod_ready.go:81] duration metric: took 11.765335ms waiting for pod "etcd-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:13.873306   20300 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:13.954251   20300 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-768000" in "kube-system" namespace has status "Ready":"True"
	I0216 10:09:13.954273   20300 pod_ready.go:81] duration metric: took 80.956571ms waiting for pod "kube-apiserver-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:13.954288   20300 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:13.966713   20300 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-768000" in "kube-system" namespace has status "Ready":"True"
	I0216 10:09:13.966728   20300 pod_ready.go:81] duration metric: took 12.429698ms waiting for pod "kube-controller-manager-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:13.966738   20300 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cdsv4" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:13.978863   20300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.613623307s)
	I0216 10:09:13.978897   20300 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-768000"
	I0216 10:09:14.256245   20300 pod_ready.go:92] pod "kube-proxy-cdsv4" in "kube-system" namespace has status "Ready":"True"
	I0216 10:09:14.256268   20300 pod_ready.go:81] duration metric: took 289.509629ms waiting for pod "kube-proxy-cdsv4" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:14.256282   20300 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:14.655630   20300 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-768000" in "kube-system" namespace has status "Ready":"True"
	I0216 10:09:14.655651   20300 pod_ready.go:81] duration metric: took 399.353043ms waiting for pod "kube-scheduler-default-k8s-diff-port-768000" in "kube-system" namespace to be "Ready" ...
	I0216 10:09:14.655663   20300 pod_ready.go:38] duration metric: took 1.398936813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 10:09:14.655678   20300 api_server.go:52] waiting for apiserver process to appear ...
	I0216 10:09:14.655757   20300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 10:09:14.874490   20300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.71678978s)
	I0216 10:09:14.874504   20300 api_server.go:72] duration metric: took 3.507422029s to wait for apiserver process to appear ...
	I0216 10:09:14.874519   20300 api_server.go:88] waiting for apiserver healthz status ...
	I0216 10:09:14.874533   20300 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54580/healthz ...
	I0216 10:09:14.898116   20300 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-768000 addons enable metrics-server
	
	I0216 10:09:14.879716   20300 api_server.go:279] https://127.0.0.1:54580/healthz returned 200:
	ok
	I0216 10:09:14.958739   20300 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0216 10:09:14.939818   20300 api_server.go:141] control plane version: v1.28.4
	I0216 10:09:15.002810   20300 api_server.go:131] duration metric: took 128.274714ms to wait for apiserver health ...
	I0216 10:09:15.002813   20300 addons.go:505] enable addons completed in 4.144313167s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0216 10:09:15.002825   20300 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 10:09:15.009363   20300 system_pods.go:59] 8 kube-system pods found
	I0216 10:09:15.009376   20300 system_pods.go:61] "coredns-5dd5756b68-j55mb" [adcfc855-eec1-46db-a2eb-4d8176da670e] Running
	I0216 10:09:15.009380   20300 system_pods.go:61] "etcd-default-k8s-diff-port-768000" [0a2a5f49-71ab-49ea-be8c-0232ce9ff4ae] Running
	I0216 10:09:15.009383   20300 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-768000" [0eb01031-57c7-4d5c-9084-c5dda43f90c8] Running
	I0216 10:09:15.009387   20300 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-768000" [d7a36bff-9415-4d09-aea7-3e29ca7f3618] Running
	I0216 10:09:15.009390   20300 system_pods.go:61] "kube-proxy-cdsv4" [54ac9c07-b0a0-4ad7-98cd-6761c6c4785e] Running
	I0216 10:09:15.009394   20300 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-768000" [8888ff5e-e26b-44e8-a2e9-68759a7da6f5] Running
	I0216 10:09:15.009399   20300 system_pods.go:61] "metrics-server-57f55c9bc5-8pqmt" [3910a175-8bf1-4a81-ac50-23e308651efb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 10:09:15.009404   20300 system_pods.go:61] "storage-provisioner" [2b1d7422-5aaa-468a-b348-b0505dd1b4b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0216 10:09:15.009410   20300 system_pods.go:74] duration metric: took 6.575914ms to wait for pod list to return data ...
	I0216 10:09:15.009415   20300 default_sa.go:34] waiting for default service account to be created ...
	I0216 10:09:15.055386   20300 default_sa.go:45] found service account: "default"
	I0216 10:09:15.055398   20300 default_sa.go:55] duration metric: took 45.977381ms for default service account to be created ...
	I0216 10:09:15.055405   20300 system_pods.go:116] waiting for k8s-apps to be running ...
	I0216 10:09:15.263945   20300 system_pods.go:86] 8 kube-system pods found
	I0216 10:09:15.263965   20300 system_pods.go:89] "coredns-5dd5756b68-j55mb" [adcfc855-eec1-46db-a2eb-4d8176da670e] Running
	I0216 10:09:15.263971   20300 system_pods.go:89] "etcd-default-k8s-diff-port-768000" [0a2a5f49-71ab-49ea-be8c-0232ce9ff4ae] Running
	I0216 10:09:15.263980   20300 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-768000" [0eb01031-57c7-4d5c-9084-c5dda43f90c8] Running
	I0216 10:09:15.263993   20300 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-768000" [d7a36bff-9415-4d09-aea7-3e29ca7f3618] Running
	I0216 10:09:15.264007   20300 system_pods.go:89] "kube-proxy-cdsv4" [54ac9c07-b0a0-4ad7-98cd-6761c6c4785e] Running
	I0216 10:09:15.264013   20300 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-768000" [8888ff5e-e26b-44e8-a2e9-68759a7da6f5] Running
	I0216 10:09:15.264023   20300 system_pods.go:89] "metrics-server-57f55c9bc5-8pqmt" [3910a175-8bf1-4a81-ac50-23e308651efb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 10:09:15.264045   20300 system_pods.go:89] "storage-provisioner" [2b1d7422-5aaa-468a-b348-b0505dd1b4b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0216 10:09:15.264062   20300 system_pods.go:126] duration metric: took 208.645647ms to wait for k8s-apps to be running ...
	I0216 10:09:15.264069   20300 system_svc.go:44] waiting for kubelet service to be running ....
	I0216 10:09:15.264125   20300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 10:09:15.290843   20300 system_svc.go:56] duration metric: took 26.767542ms WaitForService to wait for kubelet.
	I0216 10:09:15.290859   20300 kubeadm.go:581] duration metric: took 3.92377095s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0216 10:09:15.290884   20300 node_conditions.go:102] verifying NodePressure condition ...
	I0216 10:09:15.455222   20300 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0216 10:09:15.455238   20300 node_conditions.go:123] node cpu capacity is 12
	I0216 10:09:15.455244   20300 node_conditions.go:105] duration metric: took 164.354167ms to run NodePressure ...
	I0216 10:09:15.455252   20300 start.go:228] waiting for startup goroutines ...
	I0216 10:09:15.455257   20300 start.go:233] waiting for cluster config update ...
	I0216 10:09:15.455270   20300 start.go:242] writing updated cluster config ...
	I0216 10:09:15.455623   20300 ssh_runner.go:195] Run: rm -f paused
	I0216 10:09:15.507188   20300 start.go:601] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0216 10:09:15.530972   20300 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-768000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.343496291Z" level=info msg="Loading containers: start."
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.435583152Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.472401887Z" level=info msg="Loading containers: done."
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.480552174Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.480629800Z" level=info msg="Daemon has completed initialization"
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.499819622Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.500020070Z" level=info msg="API listen on [::]:2376"
	Feb 16 17:51:56 old-k8s-version-356000 systemd[1]: Started Docker Application Container Engine.
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:52:05.136295167Z" level=info msg="Processing signal 'terminated'"
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:52:05.137126387Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:52:05.137685126Z" level=info msg="Daemon shutdown complete"
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: docker.service: Deactivated successfully.
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: Starting Docker Application Container Engine...
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:05.196506381Z" level=info msg="Starting up"
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:05.719869752Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:05.932365156Z" level=info msg="Loading containers: start."
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.053011191Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.091165695Z" level=info msg="Loading containers: done."
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.099557844Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.099621366Z" level=info msg="Daemon has completed initialization"
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.119782992Z" level=info msg="API listen on [::]:2376"
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.119947117Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 16 17:52:06 old-k8s-version-356000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-02-16T18:09:21Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 18:09:21 up  1:28,  0 users,  load average: 5.13, 4.93, 5.02
	Linux old-k8s-version-356000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 16 18:09:19 old-k8s-version-356000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 16 18:09:19 old-k8s-version-356000 kubelet[30963]: I0216 18:09:19.677516   30963 server.go:410] Version: v1.16.0
	Feb 16 18:09:19 old-k8s-version-356000 kubelet[30963]: I0216 18:09:19.677899   30963 plugins.go:100] No cloud provider specified.
	Feb 16 18:09:19 old-k8s-version-356000 kubelet[30963]: I0216 18:09:19.677916   30963 server.go:773] Client rotation is on, will bootstrap in background
	Feb 16 18:09:19 old-k8s-version-356000 kubelet[30963]: I0216 18:09:19.679884   30963 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 16 18:09:19 old-k8s-version-356000 kubelet[30963]: W0216 18:09:19.680701   30963 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 16 18:09:19 old-k8s-version-356000 kubelet[30963]: W0216 18:09:19.680899   30963 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 16 18:09:19 old-k8s-version-356000 kubelet[30963]: F0216 18:09:19.680940   30963 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 16 18:09:19 old-k8s-version-356000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 16 18:09:19 old-k8s-version-356000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 16 18:09:20 old-k8s-version-356000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Feb 16 18:09:20 old-k8s-version-356000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 16 18:09:20 old-k8s-version-356000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 16 18:09:20 old-k8s-version-356000 kubelet[30992]: I0216 18:09:20.615297   30992 server.go:410] Version: v1.16.0
	Feb 16 18:09:20 old-k8s-version-356000 kubelet[30992]: I0216 18:09:20.615628   30992 plugins.go:100] No cloud provider specified.
	Feb 16 18:09:20 old-k8s-version-356000 kubelet[30992]: I0216 18:09:20.615641   30992 server.go:773] Client rotation is on, will bootstrap in background
	Feb 16 18:09:20 old-k8s-version-356000 kubelet[30992]: I0216 18:09:20.618170   30992 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 16 18:09:20 old-k8s-version-356000 kubelet[30992]: W0216 18:09:20.619030   30992 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 16 18:09:20 old-k8s-version-356000 kubelet[30992]: W0216 18:09:20.619108   30992 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 16 18:09:20 old-k8s-version-356000 kubelet[30992]: F0216 18:09:20.621965   30992 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 16 18:09:20 old-k8s-version-356000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 16 18:09:20 old-k8s-version-356000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 16 18:09:21 old-k8s-version-356000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Feb 16 18:09:21 old-k8s-version-356000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 16 18:09:21 old-k8s-version-356000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	

-- /stdout --
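The "container status" failure in the logs above is a knock-on effect rather than an independent one: on Kubernetes v1.16 the dockershim socket is created by the kubelet's built-in dockershim, so while the kubelet stays in a restart loop /var/run/dockershim.sock never appears and crictl has nothing to dial. The Docker daemon itself is healthy per the "==> Docker <==" section, so a manual check could go straight to it instead of through the CRI (a hypothetical diagnostic step, not part of the harness):

    # list all containers from the node's inner Docker daemon, bypassing crictl
    docker exec old-k8s-version-356000 docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}'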
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 2 (433.896989ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-356000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.51s)
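The kubelet restart loop logged above ("failed to run Kubelet: mountpoint for cpu not found", restart counter at 828) is most plausibly a cgroup-hierarchy mismatch: the node image is Ubuntu 22.04 on a 6.6 linuxkit kernel, which defaults to the unified cgroup v2 hierarchy, while kubelet v1.16 predates cgroup v2 support and expects a per-controller cpu mountpoint. A sketch of how to confirm which hierarchy the node container sees, assuming shell access to the kicbase container:

    # prints "cgroup2fs" on a unified (v2) hierarchy, "tmpfs" on legacy v1
    docker exec old-k8s-version-356000 stat -fc %T /sys/fs/cgroup/
    # on v1 a dedicated cpu controller mount should show up here; on v2 it will not
    docker exec old-k8s-version-356000 sh -c 'mount | grep cgroup'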

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (392.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:09:56.451251    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:10:11.956850    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:10:34.815632    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:11:15.147958    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:12:05.129894    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:12:14.978056    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:13:01.558793    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:13:18.305413    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
E0216 10:13:18.311844    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
E0216 10:13:18.322401    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
E0216 10:13:18.343726    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
E0216 10:13:18.384634    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
E0216 10:13:18.494695    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
E0216 10:13:18.655305    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
E0216 10:13:18.975725    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
E0216 10:13:19.617564    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
E0216 10:13:20.898134    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:13:23.461025    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
E0216 10:13:28.582974    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:13:33.314987    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 10:13:38.825586    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:13:47.012565    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 10:13:51.328381    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:13:59.307014    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
E0216 10:13:59.787067    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:14:24.857130    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:14:40.268542    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/default-k8s-diff-port-768000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:14:56.480240    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:15:11.985680    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:15:34.845822    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54079/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0216 10:15:47.906778    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-356000 -n old-k8s-version-356000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 2 (393.868267ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-356000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-356000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-356000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.881µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-356000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
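Every poll in the 9m0s wait above fails the same way: the label-selector query for dashboard pods goes to the apiserver on 127.0.0.1:54079 and gets EOF because the apiserver is down, so the test can never observe the pod it is waiting for. The equivalent manual query, assuming a reachable control plane, would look like:

    # list dashboard pods by the same label selector the harness polls
    kubectl --context old-k8s-version-356000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide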
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-356000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-356000:

-- stdout --
	[
	    {
	        "Id": "c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01",
	        "Created": "2024-02-16T17:45:56.532939996Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 376234,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:51:50.249201454Z",
	            "FinishedAt": "2024-02-16T17:51:47.463182294Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/hosts",
	        "LogPath": "/var/lib/docker/containers/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01/c7e40ba5a933a3fc62b7591c10e5e9c4a9dacef26179c3d4d877828d0eb3ca01-json.log",
	        "Name": "/old-k8s-version-356000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-356000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-356000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379-init/diff:/var/lib/docker/overlay2/64e9a96b4fa04416cc2f23ab4bb4beb68546d8c810a8f2c9b8ab796aea7581a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a00a90b2099718abfd0fc15d9a576cacf4ce76139e48d7d0a0d8eaa01bf4379/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-356000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-356000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-356000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-356000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3796cb96e0afd4653a016009a08ea7784172e6af1b37db6d9e51767cab847db4",
	            "SandboxKey": "/var/run/docker/netns/3796cb96e0af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54075"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54076"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54078"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54079"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-356000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c7e40ba5a933",
	                        "old-k8s-version-356000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "2b231f9382e31cc79f696866baa9c7eea268e7a10c9edda380cefa5e7ba22d21",
	                    "EndpointID": "90b836fe9f235eb417d06d2677831883e0644a25bed3bcd671f8e46a12d2f8a6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-356000",
	                        "c7e40ba5a933"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
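Note: the full `docker container inspect` dump above is verbose; when only the published host ports are of interest, that slice of it can be reproduced with a Go-template filter (a minimal sketch using the profile name from this run; this is not a command the test harness itself issues):

	# print just the port map, e.g. {"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"54075"}], ...}
	docker container inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-356000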
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 2 (392.895341ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
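Note: `minikube status` encodes per-component state in its exit code (bits for host, cluster, and Kubernetes health), which is why the harness can see exit status 2 while stdout still reports the host as Running. To get the same breakdown in machine-readable form, something like the following should work (a sketch; the --output flag is assumed from current minikube releases):

	out/minikube-darwin-amd64 status -p old-k8s-version-356000 --output json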
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-356000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-356000 logs -n 25: (1.410650674s)
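Note: `-n 25` is the shorthand for minikube's --length flag, so the post-mortem below covers only the most recent 25 entries of each log source. For offline triage, a complete copy can be written to disk instead (a sketch; the --file flag is assumed available in this minikube version):

	out/minikube-darwin-amd64 -p old-k8s-version-356000 logs --file=logs.txt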
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-944000                                  | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:02 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-944000                                  | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:02 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-944000                                  | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:02 PST |
	| delete  | -p embed-certs-944000                                  | embed-certs-944000           | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:02 PST |
	| delete  | -p                                                     | disable-driver-mounts-835000 | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:02 PST |
	|         | disable-driver-mounts-835000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:02 PST | 16 Feb 24 10:03 PST |
	|         | default-k8s-diff-port-768000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-768000  | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:03 PST | 16 Feb 24 10:03 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:03 PST | 16 Feb 24 10:03 PST |
	|         | default-k8s-diff-port-768000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-768000       | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:03 PST | 16 Feb 24 10:03 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:03 PST | 16 Feb 24 10:09 PST |
	|         | default-k8s-diff-port-768000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-768000                           | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:09 PST | 16 Feb 24 10:09 PST |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:09 PST | 16 Feb 24 10:09 PST |
	|         | default-k8s-diff-port-768000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:09 PST | 16 Feb 24 10:09 PST |
	|         | default-k8s-diff-port-768000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:09 PST | 16 Feb 24 10:09 PST |
	|         | default-k8s-diff-port-768000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-768000 | jenkins | v1.32.0 | 16 Feb 24 10:09 PST | 16 Feb 24 10:09 PST |
	|         | default-k8s-diff-port-768000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-047000 --memory=2200 --alsologtostderr   | newest-cni-047000            | jenkins | v1.32.0 | 16 Feb 24 10:09 PST | 16 Feb 24 10:10 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-047000             | newest-cni-047000            | jenkins | v1.32.0 | 16 Feb 24 10:10 PST | 16 Feb 24 10:10 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-047000                                   | newest-cni-047000            | jenkins | v1.32.0 | 16 Feb 24 10:10 PST | 16 Feb 24 10:10 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-047000                  | newest-cni-047000            | jenkins | v1.32.0 | 16 Feb 24 10:10 PST | 16 Feb 24 10:10 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-047000 --memory=2200 --alsologtostderr   | newest-cni-047000            | jenkins | v1.32.0 | 16 Feb 24 10:10 PST | 16 Feb 24 10:10 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| image   | newest-cni-047000 image list                           | newest-cni-047000            | jenkins | v1.32.0 | 16 Feb 24 10:10 PST | 16 Feb 24 10:10 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-047000                                   | newest-cni-047000            | jenkins | v1.32.0 | 16 Feb 24 10:10 PST | 16 Feb 24 10:10 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-047000                                   | newest-cni-047000            | jenkins | v1.32.0 | 16 Feb 24 10:11 PST | 16 Feb 24 10:11 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-047000                                   | newest-cni-047000            | jenkins | v1.32.0 | 16 Feb 24 10:11 PST | 16 Feb 24 10:11 PST |
	| delete  | -p newest-cni-047000                                   | newest-cni-047000            | jenkins | v1.32.0 | 16 Feb 24 10:11 PST | 16 Feb 24 10:11 PST |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 10:10:28
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 10:10:28.458126   20742 out.go:291] Setting OutFile to fd 1 ...
	I0216 10:10:28.458300   20742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 10:10:28.458305   20742 out.go:304] Setting ErrFile to fd 2...
	I0216 10:10:28.458309   20742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 10:10:28.459173   20742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 10:10:28.460873   20742 out.go:298] Setting JSON to false
	I0216 10:10:28.483318   20742 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5999,"bootTime":1708101029,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 10:10:28.483421   20742 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 10:10:28.505544   20742 out.go:177] * [newest-cni-047000] minikube v1.32.0 on Darwin 14.3.1
	I0216 10:10:28.548133   20742 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 10:10:28.548255   20742 notify.go:220] Checking for updates...
	I0216 10:10:28.570305   20742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 10:10:28.593004   20742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 10:10:28.614304   20742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 10:10:28.636049   20742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	I0216 10:10:28.656974   20742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 10:10:28.678871   20742 config.go:182] Loaded profile config "newest-cni-047000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 10:10:28.679722   20742 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 10:10:28.736033   20742 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 10:10:28.736216   20742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 10:10:28.840830   20742 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-16 18:10:28.830587417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 10:10:28.883534   20742 out.go:177] * Using the docker driver based on existing profile
	I0216 10:10:28.904559   20742 start.go:299] selected driver: docker
	I0216 10:10:28.904585   20742 start.go:903] validating driver "docker" against &{Name:newest-cni-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-047000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 10:10:28.904716   20742 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 10:10:28.909026   20742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 10:10:29.018156   20742 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-16 18:10:29.004813066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 10:10:29.018369   20742 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0216 10:10:29.018428   20742 cni.go:84] Creating CNI manager for ""
	I0216 10:10:29.018441   20742 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 10:10:29.018451   20742 start_flags.go:323] config:
	{Name:newest-cni-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-047000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 10:10:29.062517   20742 out.go:177] * Starting control plane node newest-cni-047000 in cluster newest-cni-047000
	I0216 10:10:29.083986   20742 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 10:10:29.105724   20742 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 10:10:29.147772   20742 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 10:10:29.147817   20742 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 10:10:29.147837   20742 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0216 10:10:29.147852   20742 cache.go:56] Caching tarball of preloaded images
	I0216 10:10:29.148037   20742 preload.go:174] Found /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0216 10:10:29.148059   20742 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0216 10:10:29.148728   20742 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/newest-cni-047000/config.json ...
	I0216 10:10:29.200444   20742 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 10:10:29.200458   20742 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 10:10:29.200480   20742 cache.go:194] Successfully downloaded all kic artifacts
	I0216 10:10:29.200533   20742 start.go:365] acquiring machines lock for newest-cni-047000: {Name:mkcba300a6687a33d259f5fdeb446eed794cfa28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 10:10:29.200631   20742 start.go:369] acquired machines lock for "newest-cni-047000" in 60.798µs
	I0216 10:10:29.200650   20742 start.go:96] Skipping create...Using existing machine configuration
	I0216 10:10:29.200660   20742 fix.go:54] fixHost starting: 
	I0216 10:10:29.200952   20742 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0216 10:10:29.251980   20742 fix.go:102] recreateIfNeeded on newest-cni-047000: state=Stopped err=<nil>
	W0216 10:10:29.252013   20742 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 10:10:29.273823   20742 out.go:177] * Restarting existing docker container for "newest-cni-047000" ...
	I0216 10:10:29.316516   20742 cli_runner.go:164] Run: docker start newest-cni-047000
	I0216 10:10:29.556699   20742 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0216 10:10:29.611324   20742 kic.go:430] container "newest-cni-047000" state is running.
	I0216 10:10:29.611928   20742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-047000
	I0216 10:10:29.669763   20742 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/newest-cni-047000/config.json ...
	I0216 10:10:29.670219   20742 machine.go:88] provisioning docker machine ...
	I0216 10:10:29.670263   20742 ubuntu.go:169] provisioning hostname "newest-cni-047000"
	I0216 10:10:29.670376   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:29.734496   20742 main.go:141] libmachine: Using SSH client type: native
	I0216 10:10:29.734879   20742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 55143 <nil> <nil>}
	I0216 10:10:29.734894   20742 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-047000 && echo "newest-cni-047000" | sudo tee /etc/hostname
	I0216 10:10:29.736126   20742 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0216 10:10:32.894535   20742 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-047000
	
	I0216 10:10:32.894645   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:32.948450   20742 main.go:141] libmachine: Using SSH client type: native
	I0216 10:10:32.948738   20742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 55143 <nil> <nil>}
	I0216 10:10:32.948756   20742 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-047000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-047000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-047000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 10:10:33.082295   20742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 10:10:33.082324   20742 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17936-1021/.minikube CaCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17936-1021/.minikube}
	I0216 10:10:33.082345   20742 ubuntu.go:177] setting up certificates
	I0216 10:10:33.082356   20742 provision.go:83] configureAuth start
	I0216 10:10:33.082434   20742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-047000
	I0216 10:10:33.133240   20742 provision.go:138] copyHostCerts
	I0216 10:10:33.133345   20742 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem, removing ...
	I0216 10:10:33.133354   20742 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
	I0216 10:10:33.133478   20742 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem (1082 bytes)
	I0216 10:10:33.133740   20742 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem, removing ...
	I0216 10:10:33.133746   20742 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
	I0216 10:10:33.133815   20742 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem (1123 bytes)
	I0216 10:10:33.133996   20742 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem, removing ...
	I0216 10:10:33.134002   20742 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
	I0216 10:10:33.134068   20742 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem (1675 bytes)
	I0216 10:10:33.134228   20742 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem org=jenkins.newest-cni-047000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-047000]
	I0216 10:10:33.216928   20742 provision.go:172] copyRemoteCerts
	I0216 10:10:33.216987   20742 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 10:10:33.217041   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:33.267989   20742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0216 10:10:33.370246   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0216 10:10:33.410245   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0216 10:10:33.451151   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 10:10:33.491596   20742 provision.go:86] duration metric: configureAuth took 409.217705ms
	I0216 10:10:33.491655   20742 ubuntu.go:193] setting minikube options for container-runtime
	I0216 10:10:33.491905   20742 config.go:182] Loaded profile config "newest-cni-047000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 10:10:33.491974   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:33.550317   20742 main.go:141] libmachine: Using SSH client type: native
	I0216 10:10:33.550632   20742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 55143 <nil> <nil>}
	I0216 10:10:33.550645   20742 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 10:10:33.689260   20742 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 10:10:33.689275   20742 ubuntu.go:71] root file system type: overlay
	I0216 10:10:33.689367   20742 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 10:10:33.689452   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:33.741123   20742 main.go:141] libmachine: Using SSH client type: native
	I0216 10:10:33.741414   20742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 55143 <nil> <nil>}
	I0216 10:10:33.741464   20742 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 10:10:33.900425   20742 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 10:10:33.900534   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:33.952632   20742 main.go:141] libmachine: Using SSH client type: native
	I0216 10:10:33.952924   20742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 55143 <nil> <nil>}
	I0216 10:10:33.952940   20742 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 10:10:34.099981   20742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 10:10:34.100005   20742 machine.go:91] provisioned docker machine in 4.429690105s
	I0216 10:10:34.100017   20742 start.go:300] post-start starting for "newest-cni-047000" (driver="docker")
	I0216 10:10:34.100027   20742 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 10:10:34.100102   20742 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 10:10:34.100159   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:34.151979   20742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0216 10:10:34.255291   20742 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 10:10:34.259383   20742 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 10:10:34.259406   20742 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 10:10:34.259416   20742 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 10:10:34.259421   20742 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 10:10:34.259430   20742 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/addons for local assets ...
	I0216 10:10:34.259519   20742 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/files for local assets ...
	I0216 10:10:34.259667   20742 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> 21512.pem in /etc/ssl/certs
	I0216 10:10:34.259869   20742 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 10:10:34.276799   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /etc/ssl/certs/21512.pem (1708 bytes)
	I0216 10:10:34.322119   20742 start.go:303] post-start completed in 222.087701ms
	I0216 10:10:34.322207   20742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 10:10:34.322269   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:34.376752   20742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0216 10:10:34.470216   20742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 10:10:34.475227   20742 fix.go:56] fixHost completed within 5.274462129s
	I0216 10:10:34.475248   20742 start.go:83] releasing machines lock for "newest-cni-047000", held for 5.274503049s
	I0216 10:10:34.475330   20742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-047000
	I0216 10:10:34.527017   20742 ssh_runner.go:195] Run: cat /version.json
	I0216 10:10:34.527043   20742 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 10:10:34.527091   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:34.527116   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:34.584123   20742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0216 10:10:34.584129   20742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0216 10:10:34.783361   20742 ssh_runner.go:195] Run: systemctl --version
	I0216 10:10:34.788921   20742 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 10:10:34.794068   20742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 10:10:34.824099   20742 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 10:10:34.824170   20742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0216 10:10:34.839144   20742 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0216 10:10:34.839160   20742 start.go:475] detecting cgroup driver to use...
	I0216 10:10:34.839176   20742 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 10:10:34.839288   20742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 10:10:34.867448   20742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0216 10:10:34.883660   20742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 10:10:34.900046   20742 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 10:10:34.900116   20742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 10:10:34.916043   20742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 10:10:34.931839   20742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 10:10:34.947543   20742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 10:10:34.963438   20742 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 10:10:34.979200   20742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 10:10:34.995714   20742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 10:10:35.011550   20742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 10:10:35.029356   20742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 10:10:35.102724   20742 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 10:10:35.189761   20742 start.go:475] detecting cgroup driver to use...
	I0216 10:10:35.189796   20742 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 10:10:35.189853   20742 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 10:10:35.208309   20742 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 10:10:35.208393   20742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 10:10:35.229543   20742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 10:10:35.260139   20742 ssh_runner.go:195] Run: which cri-dockerd
	I0216 10:10:35.264987   20742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 10:10:35.283937   20742 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 10:10:35.332208   20742 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 10:10:35.430182   20742 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 10:10:35.522362   20742 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 10:10:35.522459   20742 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 10:10:35.552737   20742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 10:10:35.612546   20742 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 10:10:35.920390   20742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0216 10:10:35.938351   20742 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0216 10:10:35.956681   20742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 10:10:35.975636   20742 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0216 10:10:36.035713   20742 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0216 10:10:36.096688   20742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 10:10:36.160745   20742 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0216 10:10:36.200399   20742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 10:10:36.219600   20742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 10:10:36.284741   20742 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0216 10:10:36.377441   20742 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0216 10:10:36.377528   20742 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0216 10:10:36.382148   20742 start.go:543] Will wait 60s for crictl version
	I0216 10:10:36.382205   20742 ssh_runner.go:195] Run: which crictl
	I0216 10:10:36.386355   20742 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0216 10:10:36.439046   20742 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0216 10:10:36.439133   20742 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 10:10:36.461640   20742 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 10:10:36.507226   20742 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 25.0.3 ...
	I0216 10:10:36.507313   20742 cli_runner.go:164] Run: docker exec -t newest-cni-047000 dig +short host.docker.internal
	I0216 10:10:36.627955   20742 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0216 10:10:36.628061   20742 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0216 10:10:36.632864   20742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
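
The bash one-liner above is an idempotent /etc/hosts update: filter out any stale host.minikube.internal record, append the fresh one, and copy the result back into place. The same logic as a Go sketch:

// hosts_record.go - sketch of the idempotent /etc/hosts rewrite done by
// the one-liner above (run it as root, or point it at a test file).
package main

import (
	"os"
	"strings"
)

func main() {
	const host = "host.minikube.internal"
	const entry = "192.168.65.254\t" + host
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale record for the same hostname (the grep -v step).
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
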
	I0216 10:10:36.650487   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:36.726619   20742 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0216 10:10:36.748420   20742 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 10:10:36.748549   20742 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 10:10:36.767734   20742 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0216 10:10:36.767756   20742 docker.go:615] Images already preloaded, skipping extraction
	I0216 10:10:36.767872   20742 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 10:10:36.786097   20742 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0216 10:10:36.786116   20742 cache_images.go:84] Images are preloaded, skipping loading
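
The image lists come from docker images --format {{.Repository}}:{{.Tag}}; since every image required for v1.29.0-rc.2 is already present, extraction of the preload tarball is skipped. A sketch of that presence check, with the comparison logic assumed (the expected names below are copied from the stdout block above):

// preload_check.go - sketch of checking the local docker image list
// against an expected preload set; not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
		"registry.k8s.io/etcd:3.5.10-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing:", img)
		}
	}
}
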
	I0216 10:10:36.786207   20742 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 10:10:36.833871   20742 cni.go:84] Creating CNI manager for ""
	I0216 10:10:36.833888   20742 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 10:10:36.833930   20742 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0216 10:10:36.833945   20742 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-047000 NodeName:newest-cni-047000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0216 10:10:36.834122   20742 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-047000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0216 10:10:36.834237   20742 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-047000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-047000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
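
The kubelet systemd drop-in above is rendered from the cluster config (Kubernetes version, node name, node IP). A sketch using text/template with an assumed, heavily trimmed template; the real unit carries the full flag set visible in the ExecStart line above:

// kubelet_unit.go - sketch of rendering a kubelet drop-in; the template
// below is an assumption, not minikube's actual template.
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("unit").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, map[string]string{
		"Version": "v1.29.0-rc.2",
		"Node":    "newest-cni-047000",
		"IP":      "192.168.67.2",
	}); err != nil {
		panic(err)
	}
}
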
	I0216 10:10:36.834298   20742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0216 10:10:36.849414   20742 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 10:10:36.849519   20742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 10:10:36.865234   20742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (420 bytes)
	I0216 10:10:36.894486   20742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0216 10:10:36.923645   20742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0216 10:10:36.954114   20742 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0216 10:10:36.959882   20742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 10:10:36.977272   20742 certs.go:56] Setting up /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/newest-cni-047000 for IP: 192.168.67.2
	I0216 10:10:36.977295   20742 certs.go:190] acquiring lock for shared ca certs: {Name:mk8795f926ccc5dd497b243df5a2c158b5c5b28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 10:10:36.977455   20742 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key
	I0216 10:10:36.977503   20742 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key
	I0216 10:10:36.977586   20742 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/newest-cni-047000/client.key
	I0216 10:10:36.977646   20742 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/newest-cni-047000/apiserver.key.c7fa3a9e
	I0216 10:10:36.977702   20742 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/newest-cni-047000/proxy-client.key
	I0216 10:10:36.977894   20742 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem (1338 bytes)
	W0216 10:10:36.977927   20742 certs.go:433] ignoring /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151_empty.pem, impossibly tiny 0 bytes
	I0216 10:10:36.977937   20742 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 10:10:36.977972   20742 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem (1082 bytes)
	I0216 10:10:36.978010   20742 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem (1123 bytes)
	I0216 10:10:36.978041   20742 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem (1675 bytes)
	I0216 10:10:36.978109   20742 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem (1708 bytes)
	I0216 10:10:36.978645   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/newest-cni-047000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 10:10:37.019405   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/newest-cni-047000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0216 10:10:37.061065   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/newest-cni-047000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 10:10:37.101667   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/newest-cni-047000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 10:10:37.142888   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 10:10:37.184503   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0216 10:10:37.226252   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 10:10:37.269745   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0216 10:10:37.312998   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /usr/share/ca-certificates/21512.pem (1708 bytes)
	I0216 10:10:37.357280   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 10:10:37.399699   20742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem --> /usr/share/ca-certificates/2151.pem (1338 bytes)
	I0216 10:10:37.440164   20742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 10:10:37.469935   20742 ssh_runner.go:195] Run: openssl version
	I0216 10:10:37.476235   20742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21512.pem && ln -fs /usr/share/ca-certificates/21512.pem /etc/ssl/certs/21512.pem"
	I0216 10:10:37.492284   20742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21512.pem
	I0216 10:10:37.496940   20742 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:51 /usr/share/ca-certificates/21512.pem
	I0216 10:10:37.496991   20742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21512.pem
	I0216 10:10:37.503657   20742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21512.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 10:10:37.519206   20742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 10:10:37.535387   20742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 10:10:37.539755   20742 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
	I0216 10:10:37.539799   20742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 10:10:37.547230   20742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 10:10:37.562758   20742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2151.pem && ln -fs /usr/share/ca-certificates/2151.pem /etc/ssl/certs/2151.pem"
	I0216 10:10:37.578287   20742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2151.pem
	I0216 10:10:37.583106   20742 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:51 /usr/share/ca-certificates/2151.pem
	I0216 10:10:37.583155   20742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2151.pem
	I0216 10:10:37.590058   20742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2151.pem /etc/ssl/certs/51391683.0"
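
Each CA above is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs as <subject-hash>.0, where the hash is exactly what openssl x509 -hash -noout prints (e.g. b5213941 for minikubeCA.pem). A sketch of that convention, shelling out to openssl just as the log does:

// hash_link.go - sketch of the <hash>.0 symlink convention used above
// (run as root; the pem path is taken from the log).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
}
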
	I0216 10:10:37.605106   20742 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 10:10:37.609913   20742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 10:10:37.616434   20742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 10:10:37.623147   20742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 10:10:37.629447   20742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 10:10:37.635646   20742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 10:10:37.643126   20742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
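
The -checkend 86400 runs above ask openssl whether each certificate expires within the next 24 hours; a nonzero exit would trigger regeneration. The equivalent check in pure Go with crypto/x509 (the path is one of the files tested above):

// checkend.go - a pure-Go equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
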
	I0216 10:10:37.649821   20742 kubeadm.go:404] StartCluster: {Name:newest-cni-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-047000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 10:10:37.649947   20742 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 10:10:37.666479   20742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 10:10:37.681552   20742 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 10:10:37.681570   20742 kubeadm.go:636] restartCluster start
	I0216 10:10:37.681630   20742 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 10:10:37.697030   20742 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:37.697116   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:37.750496   20742 kubeconfig.go:135] verify returned: extract IP: "newest-cni-047000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 10:10:37.750666   20742 kubeconfig.go:146] "newest-cni-047000" context is missing from /Users/jenkins/minikube-integration/17936-1021/kubeconfig - will repair!
	I0216 10:10:37.751024   20742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/kubeconfig: {Name:mkc64745a91dd32fe2631c66fb95eca6401b716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 10:10:37.752515   20742 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 10:10:37.768501   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:37.768601   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:37.785342   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:38.269256   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:38.269374   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:38.286569   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:38.768944   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:38.769027   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:38.787347   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:39.270588   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:39.270682   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:39.287700   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:39.769038   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:39.769176   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:39.786454   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:40.268628   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:40.268713   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:40.289510   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:40.768970   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:40.769105   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:40.787285   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:41.268690   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:41.268776   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:41.287032   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:41.770480   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:41.770606   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:41.788439   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:42.268710   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:42.268789   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:42.292007   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:42.770891   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:42.770987   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:42.787970   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:43.268870   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:43.268974   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:43.286453   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:43.768719   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:43.768831   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:43.786642   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:44.269452   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:44.269533   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:44.286439   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:44.769204   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:44.769309   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:44.787487   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:45.269545   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:45.269705   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:45.287665   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:45.768899   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:45.769011   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:45.786745   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:46.270036   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:46.270173   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:46.287453   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:46.768742   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:46.768857   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:46.786580   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:47.269044   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:47.269125   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:47.287961   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:47.770730   20742 api_server.go:166] Checking apiserver status ...
	I0216 10:10:47.770836   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 10:10:47.788954   20742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:47.788970   20742 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
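
The ten seconds of identical "Checking apiserver status" retries above are a roughly 500ms poll bounded by a context deadline; when the deadline expires with no kube-apiserver process found, minikube concludes the cluster needs reconfiguring. A sketch of that pattern (the probe is the same pgrep the log runs, minus ssh and sudo):

// poll_apiserver.go - sketch of a deadline-bounded process poll like the
// retry loop above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	for {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process found")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("needs reconfigure: apiserver error:", ctx.Err())
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}
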
	I0216 10:10:47.788989   20742 kubeadm.go:1135] stopping kube-system containers ...
	I0216 10:10:47.789061   20742 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 10:10:47.806795   20742 docker.go:483] Stopping containers: [026a11733fc5 793d67f604ad 10269b3a2cf1 daeaad8bae5f 362aae737356 193925868b6b 25a3415aace1 0966922ca714 c894c1777d01 0fd849288c07 13b0c2fa8808 8de901032d51 d8b3040650db 1bdcb5818c94 666d3a8d1315]
	I0216 10:10:47.806880   20742 ssh_runner.go:195] Run: docker stop 026a11733fc5 793d67f604ad 10269b3a2cf1 daeaad8bae5f 362aae737356 193925868b6b 25a3415aace1 0966922ca714 c894c1777d01 0fd849288c07 13b0c2fa8808 8de901032d51 d8b3040650db 1bdcb5818c94 666d3a8d1315
	I0216 10:10:47.827992   20742 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0216 10:10:47.845564   20742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 10:10:47.860621   20742 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5651 Feb 16 18:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 16 18:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Feb 16 18:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 16 18:09 /etc/kubernetes/scheduler.conf
	
	I0216 10:10:47.860686   20742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0216 10:10:47.875674   20742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0216 10:10:47.890415   20742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0216 10:10:47.905217   20742 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:47.905325   20742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0216 10:10:47.919830   20742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0216 10:10:47.934760   20742 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 10:10:47.934816   20742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0216 10:10:47.950346   20742 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 10:10:47.965555   20742 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 10:10:47.965606   20742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 10:10:48.023764   20742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 10:10:48.739787   20742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 10:10:48.902927   20742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 10:10:48.966603   20742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
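
Rather than a full kubeadm init, the restart path re-runs the individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequencing (paths and flags copied from the log; error handling simplified):

// run_phases.go - sketch of sequencing the kubeadm init phases shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all phases completed")
}
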
	I0216 10:10:49.106037   20742 api_server.go:52] waiting for apiserver process to appear ...
	I0216 10:10:49.106113   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 10:10:49.606591   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 10:10:50.106494   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 10:10:50.135671   20742 api_server.go:72] duration metric: took 1.02958694s to wait for apiserver process to appear ...
	I0216 10:10:50.135687   20742 api_server.go:88] waiting for apiserver healthz status ...
	I0216 10:10:50.135740   20742 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55147/healthz ...
	I0216 10:10:50.137433   20742 api_server.go:269] stopped: https://127.0.0.1:55147/healthz: Get "https://127.0.0.1:55147/healthz": EOF
	I0216 10:10:50.635896   20742 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55147/healthz ...
	I0216 10:10:52.844455   20742 api_server.go:279] https://127.0.0.1:55147/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 10:10:52.844475   20742 api_server.go:103] status: https://127.0.0.1:55147/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 10:10:52.844508   20742 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55147/healthz ...
	I0216 10:10:52.915898   20742 api_server.go:279] https://127.0.0.1:55147/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 10:10:52.915921   20742 api_server.go:103] status: https://127.0.0.1:55147/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
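
Until every poststarthook completes, /healthz answers 500 with the per-check [+]/[-] report above; minikube keeps polling until it receives a bare 200 "ok". A sketch of such a probe; InsecureSkipVerify is an assumption for a loopback check against the apiserver's self-signed serving certificate:

// healthz_poll.go - sketch of polling the apiserver healthz endpoint as
// the log does (port taken from the log).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://127.0.0.1:55147/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("healthz never returned 200")
}
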
	I0216 10:10:53.136985   20742 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55147/healthz ...
	I0216 10:10:53.142756   20742 api_server.go:279] https://127.0.0.1:55147/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 10:10:53.142775   20742 api_server.go:103] status: https://127.0.0.1:55147/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 10:10:53.637024   20742 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55147/healthz ...
	I0216 10:10:53.642206   20742 api_server.go:279] https://127.0.0.1:55147/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 10:10:53.642231   20742 api_server.go:103] status: https://127.0.0.1:55147/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 10:10:54.136009   20742 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55147/healthz ...
	I0216 10:10:54.142255   20742 api_server.go:279] https://127.0.0.1:55147/healthz returned 200:
	ok
	I0216 10:10:54.150717   20742 api_server.go:141] control plane version: v1.29.0-rc.2
	I0216 10:10:54.150734   20742 api_server.go:131] duration metric: took 4.014963609s to wait for apiserver health ...
	I0216 10:10:54.150742   20742 cni.go:84] Creating CNI manager for ""
	I0216 10:10:54.150753   20742 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 10:10:54.172638   20742 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 10:10:54.194382   20742 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 10:10:54.210566   20742 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0216 10:10:54.241867   20742 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 10:10:54.250402   20742 system_pods.go:59] 8 kube-system pods found
	I0216 10:10:54.250421   20742 system_pods.go:61] "coredns-76f75df574-t585d" [1cb575d8-6478-4010-83ab-f69210418c0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0216 10:10:54.250427   20742 system_pods.go:61] "etcd-newest-cni-047000" [efdeb308-af50-4e58-bee4-37487ec89d34] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 10:10:54.250434   20742 system_pods.go:61] "kube-apiserver-newest-cni-047000" [e1b83202-2693-400b-b21c-fa4c147adf24] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 10:10:54.250441   20742 system_pods.go:61] "kube-controller-manager-newest-cni-047000" [9963825f-3f5d-4300-89d9-818d2bc58405] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 10:10:54.250447   20742 system_pods.go:61] "kube-proxy-zn85p" [afe622a8-6315-4432-a635-a72dab6245e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0216 10:10:54.250452   20742 system_pods.go:61] "kube-scheduler-newest-cni-047000" [77cdee07-fb2c-4ef3-9b88-dfc42595f2be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 10:10:54.250459   20742 system_pods.go:61] "metrics-server-57f55c9bc5-dgg2z" [78f47b69-5a6a-426c-83d0-303742a1192c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 10:10:54.250484   20742 system_pods.go:61] "storage-provisioner" [25441938-f769-4537-8fce-bb61194f2fe4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0216 10:10:54.250489   20742 system_pods.go:74] duration metric: took 8.610269ms to wait for pod list to return data ...
	I0216 10:10:54.250497   20742 node_conditions.go:102] verifying NodePressure condition ...
	I0216 10:10:54.254192   20742 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0216 10:10:54.254210   20742 node_conditions.go:123] node cpu capacity is 12
	I0216 10:10:54.254226   20742 node_conditions.go:105] duration metric: took 3.724582ms to run NodePressure ...
	I0216 10:10:54.254249   20742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 10:10:54.529796   20742 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0216 10:10:54.612696   20742 ops.go:34] apiserver oom_adj: -16
	I0216 10:10:54.612719   20742 kubeadm.go:640] restartCluster took 16.930807746s
	I0216 10:10:54.612731   20742 kubeadm.go:406] StartCluster complete in 16.9625841s
	I0216 10:10:54.612758   20742 settings.go:142] acquiring lock: {Name:mk797212e07e7fce370dcd397d90efd277229019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 10:10:54.612861   20742 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 10:10:54.613836   20742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/kubeconfig: {Name:mkc64745a91dd32fe2631c66fb95eca6401b716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 10:10:54.614200   20742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0216 10:10:54.614222   20742 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0216 10:10:54.614301   20742 addons.go:69] Setting default-storageclass=true in profile "newest-cni-047000"
	I0216 10:10:54.614311   20742 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-047000"
	I0216 10:10:54.614328   20742 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-047000"
	I0216 10:10:54.614334   20742 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-047000"
	W0216 10:10:54.614343   20742 addons.go:243] addon storage-provisioner should already be in state true
	I0216 10:10:54.614395   20742 host.go:66] Checking if "newest-cni-047000" exists ...
	I0216 10:10:54.614402   20742 addons.go:69] Setting metrics-server=true in profile "newest-cni-047000"
	I0216 10:10:54.614445   20742 addons.go:234] Setting addon metrics-server=true in "newest-cni-047000"
	W0216 10:10:54.614472   20742 addons.go:243] addon metrics-server should already be in state true
	I0216 10:10:54.614481   20742 config.go:182] Loaded profile config "newest-cni-047000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 10:10:54.614511   20742 addons.go:69] Setting dashboard=true in profile "newest-cni-047000"
	I0216 10:10:54.614540   20742 addons.go:234] Setting addon dashboard=true in "newest-cni-047000"
	I0216 10:10:54.614548   20742 host.go:66] Checking if "newest-cni-047000" exists ...
	W0216 10:10:54.614550   20742 addons.go:243] addon dashboard should already be in state true
	I0216 10:10:54.614614   20742 host.go:66] Checking if "newest-cni-047000" exists ...
	I0216 10:10:54.614766   20742 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0216 10:10:54.614988   20742 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0216 10:10:54.616286   20742 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0216 10:10:54.616281   20742 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0216 10:10:54.626182   20742 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-047000" context rescaled to 1 replicas
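
Rescaling the coredns deployment to a single replica keeps this one-node cluster from scheduling a redundant DNS pod. A sketch of that step with client-go (the kubeconfig path is taken from the log; the code is illustrative, not minikube's kapi implementation):

// rescale_coredns.go - sketch of scaling the coredns deployment to 1
// replica via the client-go scale subresource.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/17936-1021/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
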
	I0216 10:10:54.626253   20742 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 10:10:54.649641   20742 out.go:177] * Verifying Kubernetes components...
	I0216 10:10:54.690679   20742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 10:10:54.741799   20742 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0216 10:10:54.705575   20742 addons.go:234] Setting addon default-storageclass=true in "newest-cni-047000"
	I0216 10:10:54.784550   20742 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 10:10:54.822711   20742 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	W0216 10:10:54.822743   20742 addons.go:243] addon default-storageclass should already be in state true
	I0216 10:10:54.860798   20742 host.go:66] Checking if "newest-cni-047000" exists ...
	I0216 10:10:54.903816   20742 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0216 10:10:54.860810   20742 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0216 10:10:54.882672   20742 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 10:10:54.883277   20742 cli_runner.go:164] Run: docker container inspect newest-cni-047000 --format={{.State.Status}}
	I0216 10:10:54.903844   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0216 10:10:54.903942   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0216 10:10:54.903963   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:54.925825   20742 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0216 10:10:54.925856   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0216 10:10:54.925903   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:54.925962   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:54.940690   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:54.940728   20742 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0216 10:10:54.987455   20742 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0216 10:10:54.987482   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0216 10:10:54.987571   20742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-047000
	I0216 10:10:55.019808   20742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0216 10:10:55.025207   20742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0216 10:10:55.025232   20742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0216 10:10:55.037817   20742 api_server.go:52] waiting for apiserver process to appear ...
	I0216 10:10:55.037894   20742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 10:10:55.068155   20742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/newest-cni-047000/id_rsa Username:docker}
	I0216 10:10:55.208718   20742 api_server.go:72] duration metric: took 582.412606ms to wait for apiserver process to appear ...
	I0216 10:10:55.208738   20742 api_server.go:88] waiting for apiserver healthz status ...
	I0216 10:10:55.208769   20742 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55147/healthz ...
	I0216 10:10:55.218849   20742 api_server.go:279] https://127.0.0.1:55147/healthz returned 200:
	ok
	I0216 10:10:55.221055   20742 api_server.go:141] control plane version: v1.29.0-rc.2
	I0216 10:10:55.221074   20742 api_server.go:131] duration metric: took 12.329277ms to wait for apiserver health ...
	I0216 10:10:55.221082   20742 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 10:10:55.230315   20742 system_pods.go:59] 8 kube-system pods found
	I0216 10:10:55.230335   20742 system_pods.go:61] "coredns-76f75df574-t585d" [1cb575d8-6478-4010-83ab-f69210418c0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0216 10:10:55.230364   20742 system_pods.go:61] "etcd-newest-cni-047000" [efdeb308-af50-4e58-bee4-37487ec89d34] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 10:10:55.230373   20742 system_pods.go:61] "kube-apiserver-newest-cni-047000" [e1b83202-2693-400b-b21c-fa4c147adf24] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 10:10:55.230380   20742 system_pods.go:61] "kube-controller-manager-newest-cni-047000" [9963825f-3f5d-4300-89d9-818d2bc58405] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 10:10:55.230386   20742 system_pods.go:61] "kube-proxy-zn85p" [afe622a8-6315-4432-a635-a72dab6245e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0216 10:10:55.230393   20742 system_pods.go:61] "kube-scheduler-newest-cni-047000" [77cdee07-fb2c-4ef3-9b88-dfc42595f2be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 10:10:55.230402   20742 system_pods.go:61] "metrics-server-57f55c9bc5-dgg2z" [78f47b69-5a6a-426c-83d0-303742a1192c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 10:10:55.230409   20742 system_pods.go:61] "storage-provisioner" [25441938-f769-4537-8fce-bb61194f2fe4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0216 10:10:55.230417   20742 system_pods.go:74] duration metric: took 9.330448ms to wait for pod list to return data ...
	I0216 10:10:55.230426   20742 default_sa.go:34] waiting for default service account to be created ...
	I0216 10:10:55.303358   20742 default_sa.go:45] found service account: "default"
	I0216 10:10:55.303382   20742 default_sa.go:55] duration metric: took 72.946057ms for default service account to be created ...
	I0216 10:10:55.303392   20742 kubeadm.go:581] duration metric: took 677.094153ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0216 10:10:55.303407   20742 node_conditions.go:102] verifying NodePressure condition ...
	I0216 10:10:55.308624   20742 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0216 10:10:55.308643   20742 node_conditions.go:123] node cpu capacity is 12
	I0216 10:10:55.308654   20742 node_conditions.go:105] duration metric: took 5.242521ms to run NodePressure ...
	I0216 10:10:55.308666   20742 start.go:228] waiting for startup goroutines ...
	I0216 10:10:55.422293   20742 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0216 10:10:55.422310   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0216 10:10:55.442959   20742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0216 10:10:55.507476   20742 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0216 10:10:55.507499   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0216 10:10:55.511631   20742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 10:10:55.531738   20742 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0216 10:10:55.531756   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0216 10:10:55.610648   20742 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0216 10:10:55.610667   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0216 10:10:55.709916   20742 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 10:10:55.709942   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0216 10:10:55.723125   20742 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0216 10:10:55.723146   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0216 10:10:55.822431   20742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 10:10:55.827740   20742 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0216 10:10:55.827754   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0216 10:10:55.929675   20742 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0216 10:10:55.929691   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0216 10:10:56.040730   20742 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0216 10:10:56.040763   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0216 10:10:56.137249   20742 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0216 10:10:56.137268   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0216 10:10:56.235077   20742 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0216 10:10:56.235096   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0216 10:10:56.316976   20742 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0216 10:10:56.316992   20742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0216 10:10:56.353026   20742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0216 10:10:56.933457   20742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.421770958s)
	I0216 10:10:57.105250   20742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.282747874s)
	I0216 10:10:57.105296   20742 addons.go:470] Verifying addon metrics-server=true in "newest-cni-047000"
	I0216 10:10:57.360886   20742 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-047000 addons enable metrics-server
	
	I0216 10:10:57.433661   20742 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0216 10:10:57.492754   20742 addons.go:505] enable addons completed in 2.878486009s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0216 10:10:57.492783   20742 start.go:233] waiting for cluster config update ...
	I0216 10:10:57.492803   20742 start.go:242] writing updated cluster config ...
	I0216 10:10:57.493256   20742 ssh_runner.go:195] Run: rm -f paused
	I0216 10:10:57.539724   20742 start.go:601] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0216 10:10:57.560555   20742 out.go:177] * Done! kubectl is now configured to use "newest-cni-047000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.343496291Z" level=info msg="Loading containers: start."
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.435583152Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.472401887Z" level=info msg="Loading containers: done."
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.480552174Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.480629800Z" level=info msg="Daemon has completed initialization"
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.499819622Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 16 17:51:56 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:51:56.500020070Z" level=info msg="API listen on [::]:2376"
	Feb 16 17:51:56 old-k8s-version-356000 systemd[1]: Started Docker Application Container Engine.
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:52:05.136295167Z" level=info msg="Processing signal 'terminated'"
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:52:05.137126387Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[737]: time="2024-02-16T17:52:05.137685126Z" level=info msg="Daemon shutdown complete"
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: docker.service: Deactivated successfully.
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 16 17:52:05 old-k8s-version-356000 systemd[1]: Starting Docker Application Container Engine...
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:05.196506381Z" level=info msg="Starting up"
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:05.719869752Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 16 17:52:05 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:05.932365156Z" level=info msg="Loading containers: start."
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.053011191Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.091165695Z" level=info msg="Loading containers: done."
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.099557844Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.099621366Z" level=info msg="Daemon has completed initialization"
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.119782992Z" level=info msg="API listen on [::]:2376"
	Feb 16 17:52:06 old-k8s-version-356000 dockerd[974]: time="2024-02-16T17:52:06.119947117Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 16 17:52:06 old-k8s-version-356000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	time="2024-02-16T18:15:54Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 18:15:54 up  1:35,  0 users,  load average: 2.99, 3.68, 4.43
	Linux old-k8s-version-356000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 16 18:15:52 old-k8s-version-356000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 16 18:15:53 old-k8s-version-356000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1327.
	Feb 16 18:15:53 old-k8s-version-356000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 16 18:15:53 old-k8s-version-356000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 16 18:15:53 old-k8s-version-356000 kubelet[39645]: I0216 18:15:53.360815   39645 server.go:410] Version: v1.16.0
	Feb 16 18:15:53 old-k8s-version-356000 kubelet[39645]: I0216 18:15:53.361009   39645 plugins.go:100] No cloud provider specified.
	Feb 16 18:15:53 old-k8s-version-356000 kubelet[39645]: I0216 18:15:53.361032   39645 server.go:773] Client rotation is on, will bootstrap in background
	Feb 16 18:15:53 old-k8s-version-356000 kubelet[39645]: I0216 18:15:53.363050   39645 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 16 18:15:53 old-k8s-version-356000 kubelet[39645]: W0216 18:15:53.363808   39645 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 16 18:15:53 old-k8s-version-356000 kubelet[39645]: W0216 18:15:53.363867   39645 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 16 18:15:53 old-k8s-version-356000 kubelet[39645]: F0216 18:15:53.363887   39645 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 16 18:15:53 old-k8s-version-356000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 16 18:15:53 old-k8s-version-356000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 16 18:15:53 old-k8s-version-356000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1328.
	Feb 16 18:15:53 old-k8s-version-356000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 16 18:15:54 old-k8s-version-356000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 16 18:15:54 old-k8s-version-356000 kubelet[39744]: I0216 18:15:54.124832   39744 server.go:410] Version: v1.16.0
	Feb 16 18:15:54 old-k8s-version-356000 kubelet[39744]: I0216 18:15:54.125027   39744 plugins.go:100] No cloud provider specified.
	Feb 16 18:15:54 old-k8s-version-356000 kubelet[39744]: I0216 18:15:54.125036   39744 server.go:773] Client rotation is on, will bootstrap in background
	Feb 16 18:15:54 old-k8s-version-356000 kubelet[39744]: I0216 18:15:54.126669   39744 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 16 18:15:54 old-k8s-version-356000 kubelet[39744]: W0216 18:15:54.127275   39744 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 16 18:15:54 old-k8s-version-356000 kubelet[39744]: W0216 18:15:54.127334   39744 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 16 18:15:54 old-k8s-version-356000 kubelet[39744]: F0216 18:15:54.127355   39744 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 16 18:15:54 old-k8s-version-356000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 16 18:15:54 old-k8s-version-356000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-356000 -n old-k8s-version-356000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 2 (402.447195ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-356000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (392.79s)
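
To re-run just this failing subtest outside CI, a minimal sketch (assumptions, not taken from this report: a minikube source checkout, the darwin binary already built at out/minikube-darwin-amd64, and a --binary flag on the integration harness; extra build tags or harness flags may be required):

	# hypothetical local reproduction of the failing subtest, not part of the recorded run
	go test ./test/integration -v -test.timeout=90m \
		-run 'TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop' \
		--binary=../../out/minikube-darwin-amd64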


Test pass (300/333)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 21.83
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
9 TestDownloadOnly/v1.16.0/DeleteAll 0.63
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.28.4/json-events 20.29
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.29
18 TestDownloadOnly/v1.28.4/DeleteAll 0.65
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.37
21 TestDownloadOnly/v1.29.0-rc.2/json-events 18.05
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.31
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.65
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.37
29 TestDownloadOnlyKic 2.01
30 TestBinaryMirror 1.61
31 TestOffline 42.68
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
36 TestAddons/Setup 335.99
40 TestAddons/parallel/InspektorGadget 11.92
41 TestAddons/parallel/MetricsServer 6.82
42 TestAddons/parallel/HelmTiller 10.8
44 TestAddons/parallel/CSI 62.09
45 TestAddons/parallel/Headlamp 13.62
46 TestAddons/parallel/CloudSpanner 6.69
47 TestAddons/parallel/LocalPath 55.57
48 TestAddons/parallel/NvidiaDevicePlugin 5.76
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 11.82
54 TestCertOptions 25.16
55 TestCertExpiration 233.5
56 TestDockerFlags 27.11
57 TestForceSystemdFlag 26.79
58 TestForceSystemdEnv 29.1
61 TestHyperKitDriverInstallOrUpdate 8.81
64 TestErrorSpam/setup 23.68
65 TestErrorSpam/start 2.11
66 TestErrorSpam/status 1.3
67 TestErrorSpam/pause 1.78
68 TestErrorSpam/unpause 1.85
69 TestErrorSpam/stop 11.43
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 39.59
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 40.03
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 10.54
81 TestFunctional/serial/CacheCmd/cache/add_local 1.89
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
83 TestFunctional/serial/CacheCmd/cache/list 0.08
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.46
85 TestFunctional/serial/CacheCmd/cache/cache_reload 3.43
86 TestFunctional/serial/CacheCmd/cache/delete 0.17
87 TestFunctional/serial/MinikubeKubectlCmd 1.3
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.67
89 TestFunctional/serial/ExtraConfig 42.95
90 TestFunctional/serial/ComponentHealth 0.08
91 TestFunctional/serial/LogsCmd 3.26
92 TestFunctional/serial/LogsFileCmd 3.4
93 TestFunctional/serial/InvalidService 4.15
95 TestFunctional/parallel/ConfigCmd 0.53
96 TestFunctional/parallel/DashboardCmd 13.15
97 TestFunctional/parallel/DryRun 1.39
98 TestFunctional/parallel/InternationalLanguage 0.67
99 TestFunctional/parallel/StatusCmd 1.36
104 TestFunctional/parallel/AddonsCmd 2.07
105 TestFunctional/parallel/PersistentVolumeClaim 45.87
107 TestFunctional/parallel/SSHCmd 0.77
108 TestFunctional/parallel/CpCmd 2.36
109 TestFunctional/parallel/MySQL 118.06
110 TestFunctional/parallel/FileSync 0.47
111 TestFunctional/parallel/CertSync 2.88
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
119 TestFunctional/parallel/License 1.55
120 TestFunctional/parallel/Version/short 0.1
121 TestFunctional/parallel/Version/components 0.75
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
126 TestFunctional/parallel/ImageCommands/ImageBuild 5.18
127 TestFunctional/parallel/ImageCommands/Setup 5.72
128 TestFunctional/parallel/DockerEnv/bash 1.94
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.99
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.01
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.22
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.2
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.08
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.33
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 55.16
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.24
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
150 TestFunctional/parallel/ServiceCmd/DeployApp 7.13
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
152 TestFunctional/parallel/ProfileCmd/profile_list 0.5
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
154 TestFunctional/parallel/ServiceCmd/List 1.18
155 TestFunctional/parallel/MountCmd/any-port 11.74
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.16
157 TestFunctional/parallel/ServiceCmd/HTTPS 15
158 TestFunctional/parallel/MountCmd/specific-port 2.23
159 TestFunctional/parallel/MountCmd/VerifyCleanup 2.51
160 TestFunctional/parallel/ServiceCmd/Format 15.01
161 TestFunctional/parallel/ServiceCmd/URL 15
162 TestFunctional/delete_addon-resizer_images 0.13
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.05
168 TestImageBuild/serial/Setup 22.68
169 TestImageBuild/serial/NormalBuild 4.29
170 TestImageBuild/serial/BuildWithBuildArg 1.29
171 TestImageBuild/serial/BuildWithDockerIgnore 1.06
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.09
182 TestJSONOutput/start/Command 39.96
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.62
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.63
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 10.77
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.77
207 TestKicCustomNetwork/create_custom_network 25.08
208 TestKicCustomNetwork/use_default_bridge_network 25.29
209 TestKicExistingNetwork 24.96
210 TestKicCustomSubnet 25.74
211 TestKicStaticIP 24.98
212 TestMainNoArgs 0.08
213 TestMinikubeProfile 52.5
216 TestMountStart/serial/StartWithMountFirst 8.09
217 TestMountStart/serial/VerifyMountFirst 0.38
218 TestMountStart/serial/StartWithMountSecond 8.08
219 TestMountStart/serial/VerifyMountSecond 0.39
220 TestMountStart/serial/DeleteFirst 2.07
221 TestMountStart/serial/VerifyMountPostDelete 0.39
222 TestMountStart/serial/Stop 1.55
223 TestMountStart/serial/RestartStopped 8.95
224 TestMountStart/serial/VerifyMountPostStop 0.39
227 TestMultiNode/serial/FreshStart2Nodes 65.51
228 TestMultiNode/serial/DeployApp2Nodes 42.42
229 TestMultiNode/serial/PingHostFrom2Pods 0.94
230 TestMultiNode/serial/AddNode 16.29
231 TestMultiNode/serial/MultiNodeLabels 0.09
232 TestMultiNode/serial/ProfileList 0.48
233 TestMultiNode/serial/CopyFile 14.57
234 TestMultiNode/serial/StopNode 3.01
235 TestMultiNode/serial/StartAfterStop 13.45
236 TestMultiNode/serial/RestartKeepsNodes 100.54
237 TestMultiNode/serial/DeleteNode 5.99
238 TestMultiNode/serial/StopMultiNode 21.9
239 TestMultiNode/serial/RestartMultiNode 63.93
240 TestMultiNode/serial/ValidateNameConflict 25.93
244 TestPreload 176.81
246 TestScheduledStopUnix 95.71
249 TestInsufficientStorage 10.74
250 TestRunningBinaryUpgrade 192.05
253 TestMissingContainerUpgrade 109.01
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 21.02
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 22.9
267 TestStoppedBinaryUpgrade/Setup 4.58
268 TestStoppedBinaryUpgrade/Upgrade 75.72
269 TestStoppedBinaryUpgrade/MinikubeLogs 3.2
271 TestPause/serial/Start 74.81
272 TestPause/serial/SecondStartNoReconfiguration 40.77
273 TestPause/serial/Pause 0.65
274 TestPause/serial/VerifyStatus 0.41
275 TestPause/serial/Unpause 0.67
276 TestPause/serial/PauseAgain 0.79
277 TestPause/serial/DeletePaused 2.47
278 TestPause/serial/VerifyDeletedResources 16.06
287 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
288 TestNoKubernetes/serial/StartWithK8s 23.29
289 TestNoKubernetes/serial/StartWithStopK8s 8.65
290 TestNoKubernetes/serial/Start 7.28
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
292 TestNoKubernetes/serial/ProfileList 1.34
293 TestNoKubernetes/serial/Stop 1.56
294 TestNoKubernetes/serial/StartNoArgs 7.99
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
296 TestNetworkPlugins/group/auto/Start 48.3
297 TestNetworkPlugins/group/auto/KubeletFlags 0.39
298 TestNetworkPlugins/group/auto/NetCatPod 13.17
299 TestNetworkPlugins/group/auto/DNS 0.15
300 TestNetworkPlugins/group/auto/Localhost 0.12
301 TestNetworkPlugins/group/auto/HairPin 0.12
302 TestNetworkPlugins/group/calico/Start 66.83
303 TestNetworkPlugins/group/calico/ControllerPod 6.01
304 TestNetworkPlugins/group/calico/KubeletFlags 0.5
305 TestNetworkPlugins/group/calico/NetCatPod 13.28
306 TestNetworkPlugins/group/calico/DNS 0.14
307 TestNetworkPlugins/group/calico/Localhost 0.12
308 TestNetworkPlugins/group/calico/HairPin 0.13
309 TestNetworkPlugins/group/custom-flannel/Start 53.51
310 TestNetworkPlugins/group/false/Start 39.64
311 TestNetworkPlugins/group/false/KubeletFlags 0.45
312 TestNetworkPlugins/group/false/NetCatPod 13.25
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.2
315 TestNetworkPlugins/group/false/DNS 0.14
316 TestNetworkPlugins/group/false/Localhost 0.12
317 TestNetworkPlugins/group/false/HairPin 0.11
318 TestNetworkPlugins/group/custom-flannel/DNS 0.15
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
321 TestNetworkPlugins/group/kindnet/Start 51.34
322 TestNetworkPlugins/group/flannel/Start 51.62
323 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
324 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
325 TestNetworkPlugins/group/kindnet/NetCatPod 13.24
326 TestNetworkPlugins/group/flannel/ControllerPod 6.01
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
328 TestNetworkPlugins/group/kindnet/DNS 0.16
329 TestNetworkPlugins/group/kindnet/Localhost 0.13
330 TestNetworkPlugins/group/kindnet/HairPin 0.13
331 TestNetworkPlugins/group/flannel/NetCatPod 13.19
332 TestNetworkPlugins/group/flannel/DNS 0.15
333 TestNetworkPlugins/group/flannel/Localhost 0.14
334 TestNetworkPlugins/group/flannel/HairPin 0.14
335 TestNetworkPlugins/group/enable-default-cni/Start 38.79
336 TestNetworkPlugins/group/bridge/Start 38.17
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.22
339 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
340 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
341 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
342 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
343 TestNetworkPlugins/group/bridge/NetCatPod 15.2
344 TestNetworkPlugins/group/bridge/DNS 0.16
345 TestNetworkPlugins/group/bridge/Localhost 0.14
346 TestNetworkPlugins/group/bridge/HairPin 0.15
347 TestNetworkPlugins/group/kubenet/Start 40.31
350 TestNetworkPlugins/group/kubenet/KubeletFlags 0.44
351 TestNetworkPlugins/group/kubenet/NetCatPod 13.22
352 TestNetworkPlugins/group/kubenet/DNS 0.13
353 TestNetworkPlugins/group/kubenet/Localhost 0.13
354 TestNetworkPlugins/group/kubenet/HairPin 0.12
356 TestStartStop/group/no-preload/serial/FirstStart 153.52
357 TestStartStop/group/no-preload/serial/DeployApp 14.27
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
359 TestStartStop/group/no-preload/serial/Stop 10.86
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.43
361 TestStartStop/group/no-preload/serial/SecondStart 337.15
364 TestStartStop/group/old-k8s-version/serial/Stop 1.56
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.44
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 18.01
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
370 TestStartStop/group/no-preload/serial/Pause 3.24
372 TestStartStop/group/embed-certs/serial/FirstStart 37.5
373 TestStartStop/group/embed-certs/serial/DeployApp 12.26
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.39
375 TestStartStop/group/embed-certs/serial/Stop 11
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.45
377 TestStartStop/group/embed-certs/serial/SecondStart 314.39
379 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.01
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
381 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
382 TestStartStop/group/embed-certs/serial/Pause 3.32
384 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 38.47
385 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.27
386 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.44
387 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.88
388 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.43
389 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 332.91
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.01
392 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
393 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
394 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.25
396 TestStartStop/group/newest-cni/serial/FirstStart 35.01
397 TestStartStop/group/newest-cni/serial/DeployApp 0
398 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
399 TestStartStop/group/newest-cni/serial/Stop 10.89
400 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.47
401 TestStartStop/group/newest-cni/serial/SecondStart 29.67
402 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
405 TestStartStop/group/newest-cni/serial/Pause 3.26
TestDownloadOnly/v1.16.0/json-events (21.83s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-990000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-990000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (21.834369484s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (21.83s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-990000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-990000: exit status 85 (293.939207ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-990000 | jenkins | v1.32.0 | 16 Feb 24 08:41 PST |          |
	|         | -p download-only-990000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 08:41:16
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 08:41:16.355969    2153 out.go:291] Setting OutFile to fd 1 ...
	I0216 08:41:16.356200    2153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:41:16.356206    2153 out.go:304] Setting ErrFile to fd 2...
	I0216 08:41:16.356211    2153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:41:16.356390    2153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	W0216 08:41:16.356594    2153 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17936-1021/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17936-1021/.minikube/config/config.json: no such file or directory
	I0216 08:41:16.358533    2153 out.go:298] Setting JSON to true
	I0216 08:41:16.385625    2153 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":647,"bootTime":1708101029,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 08:41:16.385755    2153 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 08:41:16.407810    2153 out.go:97] [download-only-990000] minikube v1.32.0 on Darwin 14.3.1
	I0216 08:41:16.428797    2153 out.go:169] MINIKUBE_LOCATION=17936
	W0216 08:41:16.407944    2153 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball: no such file or directory
	I0216 08:41:16.407960    2153 notify.go:220] Checking for updates...
	I0216 08:41:16.471610    2153 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 08:41:16.494755    2153 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 08:41:16.516572    2153 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 08:41:16.537749    2153 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	W0216 08:41:16.580542    2153 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0216 08:41:16.580943    2153 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 08:41:16.637420    2153 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 08:41:16.637562    2153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 08:41:16.749200    2153 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:96 SystemTime:2024-02-16 16:41:16.733505305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:24 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 08:41:16.770527    2153 out.go:97] Using the docker driver based on user configuration
	I0216 08:41:16.770572    2153 start.go:299] selected driver: docker
	I0216 08:41:16.770580    2153 start.go:903] validating driver "docker" against <nil>
	I0216 08:41:16.770792    2153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 08:41:16.880394    2153 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:96 SystemTime:2024-02-16 16:41:16.865304502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:24 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 08:41:16.880581    2153 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 08:41:16.884995    2153 start_flags.go:394] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0216 08:41:16.885458    2153 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 08:41:16.906821    2153 out.go:169] Using Docker Desktop driver with root privileges
	I0216 08:41:16.928655    2153 cni.go:84] Creating CNI manager for ""
	I0216 08:41:16.928706    2153 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 08:41:16.928724    2153 start_flags.go:323] config:
	{Name:download-only-990000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-990000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 08:41:16.950533    2153 out.go:97] Starting control plane node download-only-990000 in cluster download-only-990000
	I0216 08:41:16.950566    2153 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 08:41:16.971580    2153 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0216 08:41:16.971661    2153 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 08:41:16.971783    2153 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 08:41:17.021732    2153 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 08:41:17.021952    2153 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0216 08:41:17.022091    2153 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 08:41:17.377230    2153 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 08:41:17.377253    2153 cache.go:56] Caching tarball of preloaded images
	I0216 08:41:17.377509    2153 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 08:41:17.399047    2153 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0216 08:41:17.399061    2153 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0216 08:41:17.968678    2153 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0216 08:41:33.435898    2153 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0216 08:41:33.436138    2153 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0216 08:41:33.982400    2153 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 08:41:33.982619    2153 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/download-only-990000/config.json ...
	I0216 08:41:33.982643    2153 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/download-only-990000/config.json: {Name:mk7c2f4c628670d794a93faf89b6d46e63b576c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 08:41:33.982951    2153 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 08:41:33.983225    2153 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-990000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
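
The preload steps logged above follow a download-then-verify pattern: the tarball is fetched with a ?checksum=md5: query, the checksum is saved, and the file on disk is re-verified before use. A minimal Go sketch of that verification step, using the file name and MD5 value from this log (an illustration only, not minikube's actual preload code):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 hashes the file at path and compares it against the
    // expected hex digest, mirroring the "verifying checksum" step above.
    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        // File name and digest taken from the download URL in the log.
        err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
            "326f3ce331abb64565b50b8c9e791244")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("preload tarball verified")
    }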

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.63s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-990000
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (20.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-971000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-971000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (20.2907754s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (20.29s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-971000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-971000: exit status 85 (289.187248ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-990000 | jenkins | v1.32.0 | 16 Feb 24 08:41 PST |                     |
	|         | -p download-only-990000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Feb 24 08:41 PST | 16 Feb 24 08:41 PST |
	| delete  | -p download-only-990000        | download-only-990000 | jenkins | v1.32.0 | 16 Feb 24 08:41 PST | 16 Feb 24 08:41 PST |
	| start   | -o=json --download-only        | download-only-971000 | jenkins | v1.32.0 | 16 Feb 24 08:41 PST |                     |
	|         | -p download-only-971000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 08:41:39
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 08:41:39.486633    2228 out.go:291] Setting OutFile to fd 1 ...
	I0216 08:41:39.487615    2228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:41:39.487623    2228 out.go:304] Setting ErrFile to fd 2...
	I0216 08:41:39.487631    2228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:41:39.488209    2228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 08:41:39.489744    2228 out.go:298] Setting JSON to true
	I0216 08:41:39.511686    2228 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":670,"bootTime":1708101029,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 08:41:39.511801    2228 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 08:41:39.533315    2228 out.go:97] [download-only-971000] minikube v1.32.0 on Darwin 14.3.1
	I0216 08:41:39.554564    2228 out.go:169] MINIKUBE_LOCATION=17936
	I0216 08:41:39.533513    2228 notify.go:220] Checking for updates...
	I0216 08:41:39.599274    2228 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 08:41:39.620690    2228 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 08:41:39.642768    2228 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 08:41:39.664390    2228 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	W0216 08:41:39.707380    2228 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0216 08:41:39.707934    2228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 08:41:39.766343    2228 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 08:41:39.766477    2228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 08:41:39.874457    2228 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:96 SystemTime:2024-02-16 16:41:39.859664939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:24 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 08:41:39.895780    2228 out.go:97] Using the docker driver based on user configuration
	I0216 08:41:39.895824    2228 start.go:299] selected driver: docker
	I0216 08:41:39.895834    2228 start.go:903] validating driver "docker" against <nil>
	I0216 08:41:39.896044    2228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 08:41:39.999455    2228 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:96 SystemTime:2024-02-16 16:41:39.989660498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:24 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 08:41:39.999649    2228 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 08:41:40.002567    2228 start_flags.go:394] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0216 08:41:40.002768    2228 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 08:41:40.023765    2228 out.go:169] Using Docker Desktop driver with root privileges
	I0216 08:41:40.044999    2228 cni.go:84] Creating CNI manager for ""
	I0216 08:41:40.045043    2228 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 08:41:40.045063    2228 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0216 08:41:40.045082    2228 start_flags.go:323] config:
	{Name:download-only-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-971000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 08:41:40.066740    2228 out.go:97] Starting control plane node download-only-971000 in cluster download-only-971000
	I0216 08:41:40.066799    2228 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 08:41:40.088844    2228 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0216 08:41:40.088910    2228 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 08:41:40.088995    2228 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 08:41:40.139090    2228 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 08:41:40.139290    2228 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0216 08:41:40.139315    2228 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0216 08:41:40.139322    2228 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0216 08:41:40.139337    2228 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0216 08:41:40.351107    2228 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0216 08:41:40.351130    2228 cache.go:56] Caching tarball of preloaded images
	I0216 08:41:40.351391    2228 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 08:41:40.373282    2228 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0216 08:41:40.373332    2228 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0216 08:41:40.914507    2228 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0216 08:41:58.172510    2228 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0216 08:41:58.172744    2228 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0216 08:41:58.796365    2228 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0216 08:41:58.796640    2228 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/download-only-971000/config.json ...
	I0216 08:41:58.796665    2228 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/download-only-971000/config.json: {Name:mk94566be38e9f4dfcfeac4389419fce2bef8348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 08:41:58.796955    2228 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 08:41:58.797157    2228 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-971000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.29s)
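
Both driver checks in this run shell out to docker system info --format "{{json .}}" and decode the JSON (the cli_runner.go and info.go lines above). A rough sketch of that pattern, decoding only a few of the fields visible in the log (an illustration under stated assumptions, not minikube's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo is a small subset of the JSON printed by
    // `docker system info --format "{{json .}}"`.
    type dockerInfo struct {
        NCPU     int    `json:"NCPU"`
        MemTotal int64  `json:"MemTotal"`
        OSType   string `json:"OSType"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        // The run above reported NCPU:12 and MemTotal:6213300224 (~5925MB),
        // the inputs behind "Using suggested 5877MB memory alloc".
        fmt.Printf("cpus=%d memMB=%d os=%s\n", info.NCPU, info.MemTotal/(1024*1024), info.OSType)
    }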

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.65s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-971000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (18.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-644000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-644000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (18.049019099s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (18.05s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-644000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-644000: exit status 85 (308.989117ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-990000 | jenkins | v1.32.0 | 16 Feb 24 08:41 PST |                     |
	|         | -p download-only-990000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Feb 24 08:41 PST | 16 Feb 24 08:41 PST |
	| delete  | -p download-only-990000           | download-only-990000 | jenkins | v1.32.0 | 16 Feb 24 08:41 PST | 16 Feb 24 08:41 PST |
	| start   | -o=json --download-only           | download-only-971000 | jenkins | v1.32.0 | 16 Feb 24 08:41 PST |                     |
	|         | -p download-only-971000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Feb 24 08:42 PST | 16 Feb 24 08:42 PST |
	| delete  | -p download-only-971000           | download-only-971000 | jenkins | v1.32.0 | 16 Feb 24 08:42 PST | 16 Feb 24 08:42 PST |
	| start   | -o=json --download-only           | download-only-644000 | jenkins | v1.32.0 | 16 Feb 24 08:42 PST |                     |
	|         | -p download-only-644000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 08:42:01
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 08:42:01.097212    2296 out.go:291] Setting OutFile to fd 1 ...
	I0216 08:42:01.097373    2296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:42:01.097379    2296 out.go:304] Setting ErrFile to fd 2...
	I0216 08:42:01.097383    2296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:42:01.097563    2296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 08:42:01.098974    2296 out.go:298] Setting JSON to true
	I0216 08:42:01.121878    2296 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":692,"bootTime":1708101029,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 08:42:01.121974    2296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 08:42:01.143381    2296 out.go:97] [download-only-644000] minikube v1.32.0 on Darwin 14.3.1
	I0216 08:42:01.164297    2296 out.go:169] MINIKUBE_LOCATION=17936
	I0216 08:42:01.143541    2296 notify.go:220] Checking for updates...
	I0216 08:42:01.208428    2296 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 08:42:01.229373    2296 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 08:42:01.250243    2296 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 08:42:01.271392    2296 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	W0216 08:42:01.314077    2296 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0216 08:42:01.314568    2296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 08:42:01.373956    2296 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 08:42:01.374094    2296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 08:42:01.475492    2296 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:97 SystemTime:2024-02-16 16:42:01.464889429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:25 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 08:42:01.496159    2296 out.go:97] Using the docker driver based on user configuration
	I0216 08:42:01.496191    2296 start.go:299] selected driver: docker
	I0216 08:42:01.496201    2296 start.go:903] validating driver "docker" against <nil>
	I0216 08:42:01.496371    2296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 08:42:01.599238    2296 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:97 SystemTime:2024-02-16 16:42:01.589696638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:25 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 08:42:01.599419    2296 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 08:42:01.602263    2296 start_flags.go:394] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0216 08:42:01.602402    2296 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 08:42:01.623217    2296 out.go:169] Using Docker Desktop driver with root privileges
	I0216 08:42:01.644144    2296 cni.go:84] Creating CNI manager for ""
	I0216 08:42:01.644186    2296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 08:42:01.644207    2296 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0216 08:42:01.644221    2296 start_flags.go:323] config:
	{Name:download-only-644000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-644000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 08:42:01.666180    2296 out.go:97] Starting control plane node download-only-644000 in cluster download-only-644000
	I0216 08:42:01.666223    2296 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 08:42:01.687444    2296 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0216 08:42:01.687487    2296 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 08:42:01.687539    2296 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 08:42:01.736209    2296 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 08:42:01.736385    2296 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0216 08:42:01.736401    2296 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0216 08:42:01.736407    2296 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0216 08:42:01.736414    2296 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0216 08:42:01.948704    2296 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0216 08:42:01.948737    2296 cache.go:56] Caching tarball of preloaded images
	I0216 08:42:01.948941    2296 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 08:42:01.970739    2296 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0216 08:42:01.970778    2296 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0216 08:42:02.507994    2296 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-644000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.31s)
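
The kubectl downloads in these logs use a checksum=file: reference: the expected digest is itself fetched from a published sidecar (kubectl.sha1 for v1.16.0, kubectl.sha256 for v1.28.4). A sketch of that two-step fetch-and-verify using the v1.28.4 URL from the log (illustrative only; minikube's download package handles this internally):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchExpected downloads a checksum sidecar such as kubectl.sha256 and
    // returns its first whitespace-separated field, the hex digest.
    func fetchExpected(url string) (string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        b, err := io.ReadAll(resp.Body)
        if err != nil {
            return "", err
        }
        fields := strings.Fields(string(b))
        if len(fields) == 0 {
            return "", fmt.Errorf("empty checksum file at %s", url)
        }
        return fields[0], nil
    }

    func main() {
        want, err := fetchExpected("https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        f, err := os.Open("kubectl") // the previously downloaded binary
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("match:", hex.EncodeToString(h.Sum(nil)) == want)
    }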

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.65s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-644000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnlyKic (2.01s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-307000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-307000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-307000
--- PASS: TestDownloadOnlyKic (2.01s)
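
The image.go lines in the runs above show the lookup order for the kic base image: local docker daemon first, then the on-disk cache, with a pull only on a full miss. A simplified sketch of that order; the cache path and tarball name here are assumptions for illustration, not minikube's real layout:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    const base = "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936"

    // inDaemon reports whether the image is already loaded locally;
    // `docker image inspect` exits non-zero when it is not.
    func inDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    // inCache reports whether a cached tarball exists on disk
    // (hypothetical path and naming).
    func inCache(dir string) bool {
        _, err := os.Stat(filepath.Join(dir, "kicbase.tar"))
        return err == nil
    }

    func main() {
        cacheDir := filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "kic")
        switch {
        case inDaemon(base):
            fmt.Println("found in local docker daemon, skipping pull")
        case inCache(cacheDir):
            fmt.Println("found in local cache directory, skipping pull")
        default:
            fmt.Println("cache miss: would download", base, "to", cacheDir)
        }
    }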

                                                
                                    
TestBinaryMirror (1.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-676000 --alsologtostderr --binary-mirror http://127.0.0.1:49359 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-676000 --alsologtostderr --binary-mirror http://127.0.0.1:49359 --driver=docker : (1.01549104s)
helpers_test.go:175: Cleaning up "binary-mirror-676000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-676000
--- PASS: TestBinaryMirror (1.61s)
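
TestBinaryMirror points --binary-mirror at a local HTTP endpoint so the kubeadm/kubelet/kubectl downloads are served from it rather than dl.k8s.io. A minimal stand-in for such a mirror (the port matches the invocation above; the directory layout is an assumption and must mirror whatever paths the client requests):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve ./mirror over HTTP, e.g. ./mirror/v1.28.4/bin/darwin/amd64/kubectl,
        // so "minikube start --binary-mirror http://127.0.0.1:49359 ..." can
        // fetch its Kubernetes binaries locally.
        http.Handle("/", http.FileServer(http.Dir("./mirror")))
        log.Fatal(http.ListenAndServe("127.0.0.1:49359", nil))
    }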

                                                
                                    
TestOffline (42.68s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-580000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-580000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (40.155310382s)
helpers_test.go:175: Cleaning up "offline-docker-580000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-580000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-580000: (2.520577896s)
--- PASS: TestOffline (42.68s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-983000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-983000: exit status 85 (190.577377ms)

                                                
                                                
-- stdout --
	* Profile "addons-983000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-983000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)
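
This check (and the disabling variant that follows) asserts exit status 85 when the profile does not exist. In Go, the exit code of a failed command can be recovered from exec.ExitError, roughly like this (a sketch of the pattern, not the suite's actual helper):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-amd64",
            "addons", "enable", "dashboard", "-p", "addons-983000")
        err := cmd.Run()
        // A non-zero exit surfaces as *exec.ExitError; 85 is what the
        // test expects for a nonexistent profile.
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 85 {
            fmt.Println("expected: profile does not exist (exit status 85)")
            return
        }
        fmt.Println("unexpected result:", err)
    }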

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-983000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-983000: exit status 85 (210.937662ms)
-- stdout --
	* Profile "addons-983000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-983000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)
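Note: both PreSetup checks above exercise the same contract: addon commands against a profile that does not exist fail with exit status 85 and print a pointer to "minikube start". A minimal sketch of scripting against that contract (profile name taken from the runs above; the recovery step is an assumption, not something these tests perform):

    if ! out/minikube-darwin-amd64 addons enable dashboard -p addons-983000; then
      # exit status 85 means the profile is missing; create it first (assumed recovery)
      out/minikube-darwin-amd64 start -p addons-983000
    fi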
TestAddons/Setup (335.99s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-983000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-983000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (5m35.988785621s)
--- PASS: TestAddons/Setup (335.99s)
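Note: the Setup invocation above shows how multiple addons are enabled in a single start: each addon is passed as its own repeated --addons flag. An abbreviated sketch (addon list shortened here for illustration):

    out/minikube-darwin-amd64 start -p addons-983000 --wait=true --memory=4000 \
      --addons=registry --addons=metrics-server --addons=ingress --driver=docker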
TestAddons/parallel/InspektorGadget (11.92s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wjw5p" [b4f7c73c-d8fe-49ee-9e64-78b8f218a300] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004910642s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-983000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-983000: (5.91554129s)
--- PASS: TestAddons/parallel/InspektorGadget (11.92s)
TestAddons/parallel/MetricsServer (6.82s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.975398ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-9kpmz" [1b7dd22e-0636-48a6-b1ec-1b074b77ddd7] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004836556s
addons_test.go:415: (dbg) Run:  kubectl --context addons-983000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-983000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)
TestAddons/parallel/HelmTiller (10.8s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 44.117451ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-lp552" [ca024826-a095-4069-aed3-1eb790ef51fd] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004162876s
addons_test.go:473: (dbg) Run:  kubectl --context addons-983000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-983000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.010843008s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-983000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.80s)
TestAddons/parallel/CSI (62.09s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 20.958898ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-983000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-983000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [df402caa-3094-4f87-9269-b8a87c7fb63f] Pending
helpers_test.go:344: "task-pv-pod" [df402caa-3094-4f87-9269-b8a87c7fb63f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [df402caa-3094-4f87-9269-b8a87c7fb63f] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.006374548s
addons_test.go:584: (dbg) Run:  kubectl --context addons-983000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-983000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-983000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-983000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-983000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-983000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-983000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c0519a98-503e-4b33-9caf-de9a677b3d9c] Pending
helpers_test.go:344: "task-pv-pod-restore" [c0519a98-503e-4b33-9caf-de9a677b3d9c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c0519a98-503e-4b33-9caf-de9a677b3d9c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.038788867s
addons_test.go:626: (dbg) Run:  kubectl --context addons-983000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-983000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-983000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-983000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-983000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.089734169s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-983000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-darwin-amd64 -p addons-983000 addons disable volumesnapshots --alsologtostderr -v=1: (1.084440342s)
--- PASS: TestAddons/parallel/CSI (62.09s)
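Note: the CSI flow above is create PVC → pod → VolumeSnapshot → delete both → restore a new PVC from the snapshot. The testdata manifests are not included in this report; a hedged sketch of what a restore PVC such as testdata/csi-hostpath-driver/pvc-restore.yaml typically contains (the storage class name is an assumption):

    kubectl --context addons-983000 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc   # assumed; not shown in this report
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
    EOF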
TestAddons/parallel/Headlamp (13.62s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-983000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-983000 --alsologtostderr -v=1: (1.614455008s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-9vg6d" [a61ae472-1d09-4af9-b7d8-a6c230021e48] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-9vg6d" [a61ae472-1d09-4af9-b7d8-a6c230021e48] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004947618s
--- PASS: TestAddons/parallel/Headlamp (13.62s)
TestAddons/parallel/CloudSpanner (6.69s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7b4754d5d4-h759s" [0a04a69d-a741-45ce-9d45-5a9278688044] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0044651s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-983000
--- PASS: TestAddons/parallel/CloudSpanner (6.69s)
TestAddons/parallel/LocalPath (55.57s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-983000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-983000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f809c011-3497-44a9-bafb-c121bd5cd46b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f809c011-3497-44a9-bafb-c121bd5cd46b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f809c011-3497-44a9-bafb-c121bd5cd46b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003367654s
addons_test.go:891: (dbg) Run:  kubectl --context addons-983000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-983000 ssh "cat /opt/local-path-provisioner/pvc-1dbc81ce-d9b5-471b-a12d-7d7bfab7e58b_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-983000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-983000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-983000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-983000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.587862655s)
--- PASS: TestAddons/parallel/LocalPath (55.57s)
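Note: the ssh step above reads the written file straight off the node; local-path provisions host directories named pvc-<uid>_<namespace>_<pvc-name>. A sketch for locating the backing directory on another run (the volume name differs per run):

    pv=$(kubectl --context addons-983000 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    out/minikube-darwin-amd64 -p addons-983000 ssh "ls /opt/local-path-provisioner/ | grep $pv"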
TestAddons/parallel/NvidiaDevicePlugin (5.76s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-blx89" [84928c5a-a9b7-47fd-b3ca-eef1baceaf8a] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005654376s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-983000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.76s)
TestAddons/parallel/Yakd (5.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-2pcnh" [c9668e2a-ee17-4873-ab09-c8e01ce14df2] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004625575s
--- PASS: TestAddons/parallel/Yakd (5.01s)
TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-983000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-983000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
TestAddons/StoppedEnableDisable (11.82s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-983000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-983000: (11.075893431s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-983000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-983000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-983000
--- PASS: TestAddons/StoppedEnableDisable (11.82s)
TestCertOptions (25.16s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-361000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-361000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (21.901114789s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-361000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-361000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-361000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-361000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-361000: (2.422592733s)
--- PASS: TestCertOptions (25.16s)
TestCertExpiration (233.5s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-044000 --memory=2048 --cert-expiration=3m --driver=docker 
E0216 09:28:01.396841    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-044000 --memory=2048 --cert-expiration=3m --driver=docker : (23.165673754s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-044000 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-044000 --memory=2048 --cert-expiration=8760h --driver=docker : (27.704883666s)
helpers_test.go:175: Cleaning up "cert-expiration-044000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-044000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-044000: (2.631821201s)
--- PASS: TestCertExpiration (233.50s)
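Note: the two starts above are the whole technique: issue short-lived certs with --cert-expiration=3m, wait out the window (which apparently accounts for most of the 233.5s), then re-run start with a longer expiration to regenerate the certs in place:

    out/minikube-darwin-amd64 start -p cert-expiration-044000 --memory=2048 --cert-expiration=3m --driver=docker
    # after the 3m window has elapsed:
    out/minikube-darwin-amd64 start -p cert-expiration-044000 --memory=2048 --cert-expiration=8760h --driver=docker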
TestDockerFlags (27.11s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-933000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-933000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (23.651589075s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-933000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-933000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-933000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-933000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-933000: (2.524319315s)
--- PASS: TestDockerFlags (27.11s)
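Note: the two ssh probes above verify that --docker-env values land in the daemon unit's Environment property and --docker-opt values in its ExecStart line. A sketch of asserting that directly (the exact rendering of --docker-opt=debug as --debug is an assumption):

    out/minikube-darwin-amd64 -p docker-flags-933000 ssh \
      "sudo systemctl show docker --property=Environment --no-pager" | grep -q FOO=BAR
    out/minikube-darwin-amd64 -p docker-flags-933000 ssh \
      "sudo systemctl show docker --property=ExecStart --no-pager" | grep -q -- --debug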
TestForceSystemdFlag (26.79s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-180000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-180000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (23.674314806s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-180000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-180000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-180000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-180000: (2.594962578s)
--- PASS: TestForceSystemdFlag (26.79s)
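Note: the docker info probe above is how the flag is validated; presumably the test asserts the driver is "systemd" when --force-systemd is set. A sketch with the expected value made explicit:

    driver=$(out/minikube-darwin-amd64 -p force-systemd-flag-180000 ssh "docker info --format {{.CgroupDriver}}")
    [ "$driver" = "systemd" ] || echo "unexpected cgroup driver: $driver"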
TestForceSystemdEnv (29.1s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-649000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-649000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (25.945601238s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-649000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-649000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-649000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-649000: (2.712539074s)
--- PASS: TestForceSystemdEnv (29.10s)
TestHyperKitDriverInstallOrUpdate (8.81s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperKitDriverInstallOrUpdate (8.81s)
TestErrorSpam/setup (23.68s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-347000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-347000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 --driver=docker : (23.682453075s)
--- PASS: TestErrorSpam/setup (23.68s)
TestErrorSpam/start (2.11s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 start --dry-run
--- PASS: TestErrorSpam/start (2.11s)
TestErrorSpam/status (1.3s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 status
--- PASS: TestErrorSpam/status (1.30s)
TestErrorSpam/pause (1.78s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 pause
--- PASS: TestErrorSpam/pause (1.78s)
TestErrorSpam/unpause (1.85s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 unpause
--- PASS: TestErrorSpam/unpause (1.85s)
TestErrorSpam/stop (11.43s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 stop: (10.80279456s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-347000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-347000 stop
--- PASS: TestErrorSpam/stop (11.43s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/test/nested/copy/2151/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (39.59s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-060000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-060000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (39.587612792s)
--- PASS: TestFunctional/serial/StartWithProxy (39.59s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (40.03s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-060000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-060000 --alsologtostderr -v=8: (40.028605241s)
functional_test.go:659: soft start took 40.029161614s for "functional-060000" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.03s)
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)
TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-060000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)
TestFunctional/serial/CacheCmd/cache/add_remote (10.54s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 cache add registry.k8s.io/pause:3.1: (4.215147958s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 cache add registry.k8s.io/pause:3.3: (3.629979883s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 cache add registry.k8s.io/pause:latest: (2.696726192s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.54s)
TestFunctional/serial/CacheCmd/cache/add_local (1.89s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local915691499/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 cache add minikube-local-cache-test:functional-060000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 cache add minikube-local-cache-test:functional-060000: (1.09890957s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 cache delete minikube-local-cache-test:functional-060000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-060000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.89s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)
TestFunctional/serial/CacheCmd/cache/cache_reload (3.43s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (420.87172ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 cache reload: (2.143190389s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.43s)
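Note: the reload sequence above is a self-contained recovery recipe: remove a cached image inside the node, confirm crictl no longer sees it, then restore it from the host-side cache:

    out/minikube-darwin-amd64 -p functional-060000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-amd64 -p functional-060000 cache reload
    out/minikube-darwin-amd64 -p functional-060000 ssh sudo crictl inspecti registry.k8s.io/pause:latest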
TestFunctional/serial/CacheCmd/cache/delete (0.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)
TestFunctional/serial/MinikubeKubectlCmd (1.3s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 kubectl -- --context functional-060000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 kubectl -- --context functional-060000 get pods: (1.296825498s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.30s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.67s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-060000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-060000 get pods: (1.670318706s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.67s)
TestFunctional/serial/ExtraConfig (42.95s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-060000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0216 08:53:01.372294    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:01.378710    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:01.388911    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:01.409036    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:01.449213    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:01.529855    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:01.690155    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:02.010622    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:02.650983    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:03.931298    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:06.491712    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:11.611837    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:53:21.852155    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-060000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.950904553s)
functional_test.go:757: restart took 42.9510344s for "functional-060000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.95s)
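Note: --extra-config takes the form <component>.<key>=<value>; the run above forwards an admission-plugin list to the apiserver and restarts the cluster in place:

    out/minikube-darwin-amd64 start -p functional-060000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all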
TestFunctional/serial/ComponentHealth (0.08s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-060000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)
TestFunctional/serial/LogsCmd (3.26s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 logs
E0216 08:53:42.332100    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 logs: (3.262479615s)
--- PASS: TestFunctional/serial/LogsCmd (3.26s)
TestFunctional/serial/LogsFileCmd (3.4s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3347288569/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3347288569/001/logs.txt: (3.395411485s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.40s)
TestFunctional/serial/InvalidService (4.15s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-060000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-060000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-060000: exit status 115 (595.669198ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32163 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-060000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.15s)
TestFunctional/parallel/ConfigCmd (0.53s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 config get cpus: exit status 14 (59.971904ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 config get cpus: exit status 14 (59.228787ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
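Note: "config get" on an unset key exits 14 with "specified key could not be found in config", which the test uses to verify the unset/set/unset transitions. A sketch exploiting that exit code (hypothetical usage, not part of the test):

    if cpus=$(out/minikube-darwin-amd64 -p functional-060000 config get cpus 2>/dev/null); then
      echo "cpus is set to $cpus"
    else
      echo "cpus is unset"
    fi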
TestFunctional/parallel/DashboardCmd (13.15s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-060000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-060000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 5125: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.15s)

TestFunctional/parallel/DryRun (1.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-060000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-060000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (639.233946ms)

-- stdout --
	* [functional-060000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=17936
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0216 08:56:32.791357    5063 out.go:291] Setting OutFile to fd 1 ...
	I0216 08:56:32.791643    5063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:56:32.791649    5063 out.go:304] Setting ErrFile to fd 2...
	I0216 08:56:32.791655    5063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:56:32.791920    5063 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 08:56:32.793523    5063 out.go:298] Setting JSON to false
	I0216 08:56:32.816358    5063 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1563,"bootTime":1708101029,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 08:56:32.816454    5063 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 08:56:32.838171    5063 out.go:177] * [functional-060000] minikube v1.32.0 on Darwin 14.3.1
	I0216 08:56:32.900924    5063 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 08:56:32.880132    5063 notify.go:220] Checking for updates...
	I0216 08:56:32.921866    5063 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 08:56:32.943133    5063 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 08:56:32.964049    5063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 08:56:32.984896    5063 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	I0216 08:56:33.006096    5063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 08:56:33.027922    5063 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 08:56:33.028730    5063 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 08:56:33.086163    5063 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 08:56:33.086348    5063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 08:56:33.198237    5063 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:false NGoroutines:113 SystemTime:2024-02-16 16:56:33.188081645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 08:56:33.241970    5063 out.go:177] * Using the docker driver based on existing profile
	I0216 08:56:33.262862    5063 start.go:299] selected driver: docker
	I0216 08:56:33.262875    5063 start.go:903] validating driver "docker" against &{Name:functional-060000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-060000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 08:56:33.262977    5063 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 08:56:33.288102    5063 out.go:177] 
	W0216 08:56:33.309088    5063 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0216 08:56:33.329879    5063 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-060000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.39s)
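
Note: the non-zero exit here is the point of the test. Even with --dry-run, minikube validates the requested resources against the driver, so asking for 250MB trips the 1800MB usable-memory floor and exits with code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before anything is created. The two probes, in plain form:

    minikube start -p functional-060000 --dry-run --memory 250MB --driver=docker   # invalid request, exit 23
    minikube start -p functional-060000 --dry-run --driver=docker                  # valid against the existing profile, exit 0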

TestFunctional/parallel/InternationalLanguage (0.67s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-060000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-060000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (670.664163ms)

-- stdout --
	* [functional-060000] minikube v1.32.0 sur Darwin 14.3.1
	  - MINIKUBE_LOCATION=17936
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0216 08:56:32.121328    5041 out.go:291] Setting OutFile to fd 1 ...
	I0216 08:56:32.121514    5041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:56:32.121519    5041 out.go:304] Setting ErrFile to fd 2...
	I0216 08:56:32.121523    5041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 08:56:32.121741    5041 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 08:56:32.123666    5041 out.go:298] Setting JSON to false
	I0216 08:56:32.154712    5041 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1563,"bootTime":1708101029,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0216 08:56:32.154945    5041 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0216 08:56:32.177424    5041 out.go:177] * [functional-060000] minikube v1.32.0 sur Darwin 14.3.1
	I0216 08:56:32.219987    5041 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 08:56:32.220028    5041 notify.go:220] Checking for updates...
	I0216 08:56:32.263097    5041 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	I0216 08:56:32.284066    5041 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0216 08:56:32.326005    5041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 08:56:32.346916    5041 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	I0216 08:56:32.368106    5041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 08:56:32.389833    5041 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 08:56:32.390728    5041 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 08:56:32.448781    5041 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0216 08:56:32.448945    5041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 08:56:32.557984    5041 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:false NGoroutines:113 SystemTime:2024-02-16 16:56:32.547831219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0216 08:56:32.599967    5041 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0216 08:56:32.621139    5041 start.go:299] selected driver: docker
	I0216 08:56:32.621160    5041 start.go:903] validating driver "docker" against &{Name:functional-060000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-060000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 08:56:32.621334    5041 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 08:56:32.649014    5041 out.go:177] 
	W0216 08:56:32.670128    5041 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0216 08:56:32.691057    5041 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.67s)
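
Note: the French stderr line translates as "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB", i.e. the same failure exercised by DryRun, localized. The log does not show how the harness selects the locale; presumably it exports a French locale before running the same command, along the lines of:

    # assumption: minikube's translations are keyed off the standard locale environment variables
    LC_ALL=fr LANG=fr minikube start -p functional-060000 --dry-run --memory 250MB --driver=docker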

TestFunctional/parallel/StatusCmd (1.36s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.36s)
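
Note: the -f flag accepts a Go template over minikube's status struct; everything outside the {{...}} fields, including the test's misspelled "kublet" label, is echoed literally. For example (output values assumed from a healthy cluster):

    minikube -p functional-060000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    # -> host:Running,kubelet:Running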

TestFunctional/parallel/AddonsCmd (2.07s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 addons list
functional_test.go:1686: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 addons list: (1.945268415s)
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (2.07s)

TestFunctional/parallel/PersistentVolumeClaim (45.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fd1925e6-f660-4927-81cc-0a1b9e05f3b4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004804064s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-060000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-060000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-060000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-060000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [02c48285-4eb5-4a99-95a4-93fb9d5b26e2] Pending
helpers_test.go:344: "sp-pod" [02c48285-4eb5-4a99-95a4-93fb9d5b26e2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [02c48285-4eb5-4a99-95a4-93fb9d5b26e2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 31.003947385s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-060000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-060000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-060000 delete -f testdata/storage-provisioner/pod.yaml: (1.166331987s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-060000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f8fee2ec-aa27-42e4-82d7-5043265f914b] Pending
helpers_test.go:344: "sp-pod" [f8fee2ec-aa27-42e4-82d7-5043265f914b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f8fee2ec-aa27-42e4-82d7-5043265f914b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004202507s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-060000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.87s)
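
Note: the test round-trips data through the claim: write /tmp/mount/foo from the first pod, delete that pod, start a second one against the same PVC, and list the file again. The repo's testdata/storage-provisioner/pvc.yaml is not reproduced in this log; a hypothetical minimal equivalent of the claim named "myclaim" that it applies:

    # hypothetical stand-in for testdata/storage-provisioner/pvc.yaml (the size is illustrative)
    kubectl --context functional-060000 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF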

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (2.36s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh -n functional-060000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 cp functional-060000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd1153173268/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh -n functional-060000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
E0216 08:54:23.292146    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh -n functional-060000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.36s)
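
Note: minikube cp copies in both directions and, as the third case shows, creates missing destination directories inside the node. The three variants exercised:

    minikube -p functional-060000 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host to node
    minikube -p functional-060000 cp functional-060000:/home/docker/cp-test.txt ./cp-test.txt # node to host
    minikube -p functional-060000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt     # parent dirs created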

TestFunctional/parallel/MySQL (118.06s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-060000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-hdqjd" [944e39d3-019e-49b3-98f7-69f8bb37977e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-hdqjd" [944e39d3-019e-49b3-98f7-69f8bb37977e] Running
E0216 08:55:45.211261    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m50.030037834s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-060000 exec mysql-859648c796-hdqjd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-060000 exec mysql-859648c796-hdqjd -- mysql -ppassword -e "show databases;": exit status 1 (214.126217ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-060000 exec mysql-859648c796-hdqjd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-060000 exec mysql-859648c796-hdqjd -- mysql -ppassword -e "show databases;": exit status 1 (130.523754ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-060000 exec mysql-859648c796-hdqjd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-060000 exec mysql-859648c796-hdqjd -- mysql -ppassword -e "show databases;": exit status 1 (126.081338ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-060000 exec mysql-859648c796-hdqjd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-060000 exec mysql-859648c796-hdqjd -- mysql -ppassword -e "show databases;": exit status 1 (120.221264ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-060000 exec mysql-859648c796-hdqjd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (118.06s)
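
Note: the four failed exec attempts are expected noise, not a bug. The pod reports Running before mysqld finishes initializing, so early queries hit "Access denied" and then "Can't connect ... mysqld.sock" while the server sets up and restarts, and the test simply retries until the query succeeds. The equivalent wait loop:

    # retry until mysqld inside the pod accepts the query
    until kubectl --context functional-060000 exec mysql-859648c796-hdqjd -- \
        mysql -ppassword -e "show databases;"; do sleep 5; done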

TestFunctional/parallel/FileSync (0.47s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2151/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "sudo cat /etc/test/nested/copy/2151/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.47s)
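
Note: FileSync verifies that files placed under the minikube home's files/ tree are copied to the mirrored absolute path inside the node (the 2151 path component is the test runner's pid). A sketch of the mechanism, assuming the default MINIKUBE_HOME:

    mkdir -p ~/.minikube/files/etc/test/nested/copy/2151
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/2151/hosts
    minikube start -p functional-060000                                        # the files/ tree is synced at start
    minikube -p functional-060000 ssh "cat /etc/test/nested/copy/2151/hosts"   # appears at the mirrored path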

TestFunctional/parallel/CertSync (2.88s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2151.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "sudo cat /etc/ssl/certs/2151.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2151.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "sudo cat /usr/share/ca-certificates/2151.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/21512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "sudo cat /etc/ssl/certs/21512.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/21512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "sudo cat /usr/share/ca-certificates/21512.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.88s)
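
Note: CertSync checks each test certificate under three names: the literal .pem in /etc/ssl/certs and /usr/share/ca-certificates, plus the OpenSSL subject-hash name (51391683.0, 3ec20f2e.0) that TLS tooling uses for lookup. Assuming those hashes belong to the two test certs checked alongside them, they can be reproduced with:

    # prints the subject hash that becomes the ".0" filename in /etc/ssl/certs
    openssl x509 -noout -hash -in 2151.pem    # expected: 51391683
    openssl x509 -noout -hash -in 21512.pem   # expected: 3ec20f2e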

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-060000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
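
Note: the go-template walks the label map of the first node and prints only the keys. Unwrapped, the same query reads:

    kubectl --context functional-060000 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
    # prints label keys such as kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/os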

TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 ssh "sudo systemctl is-active crio": exit status 1 (442.174134ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
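
Note: this PASS relies on the failure above. With docker as the active runtime, systemctl is-active crio prints "inactive" and exits 3 (systemd's convention for an inactive unit), and that non-zero status propagating through ssh is exactly what the test asserts:

    minikube -p functional-060000 ssh "sudo systemctl is-active crio"   # prints "inactive", exit 3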

TestFunctional/parallel/License (1.55s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-amd64 license: (1.553576633s)
--- PASS: TestFunctional/parallel/License (1.55s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.75s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-060000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-060000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-060000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-060000 image ls --format short --alsologtostderr:
I0216 08:56:49.047342    5177 out.go:291] Setting OutFile to fd 1 ...
I0216 08:56:49.047901    5177 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:56:49.047909    5177 out.go:304] Setting ErrFile to fd 2...
I0216 08:56:49.047915    5177 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:56:49.048107    5177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
I0216 08:56:49.048793    5177 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 08:56:49.048916    5177 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 08:56:49.049314    5177 cli_runner.go:164] Run: docker container inspect functional-060000 --format={{.State.Status}}
I0216 08:56:49.102543    5177 ssh_runner.go:195] Run: systemctl --version
I0216 08:56:49.102616    5177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-060000
I0216 08:56:49.154760    5177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50071 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/functional-060000/id_rsa Username:docker}
I0216 08:56:49.247181    5177 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
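
Note: image ls renders the same node-side "docker images --no-trunc" data in four formats, each exercised by this and the next few tests:

    minikube -p functional-060000 image ls --format short   # repo:tag, one per line
    minikube -p functional-060000 image ls --format table   # aligned table with IDs and sizes
    minikube -p functional-060000 image ls --format json
    minikube -p functional-060000 image ls --format yaml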

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-060000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-060000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-060000 | 9ede528790263 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | alpine            | 6913ed9ec8d00 | 42.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-060000 | 29d27a074ca12 | 1.24MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-060000 image ls --format table --alsologtostderr:
I0216 08:56:55.136514    5220 out.go:291] Setting OutFile to fd 1 ...
I0216 08:56:55.136816    5220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:56:55.136823    5220 out.go:304] Setting ErrFile to fd 2...
I0216 08:56:55.136829    5220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:56:55.137110    5220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
I0216 08:56:55.137738    5220 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 08:56:55.137836    5220 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 08:56:55.138234    5220 cli_runner.go:164] Run: docker container inspect functional-060000 --format={{.State.Status}}
I0216 08:56:55.193089    5220 ssh_runner.go:195] Run: systemctl --version
I0216 08:56:55.193176    5220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-060000
I0216 08:56:55.246850    5220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50071 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/functional-060000/id_rsa Username:docker}
I0216 08:56:55.340846    5220 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-060000 image ls --format json --alsologtostderr:
[{"id":"9ede528790263997da9d09917ca487a511c4386d4d6131a98e4476474aa9ec58","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-060000"],"size":"30"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"29d27a074ca12952ac512abc854b0451c738a9656c3db04c6d28eb8fd4d31d1d","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-060000"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-060000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-060000 image ls --format json --alsologtostderr:
I0216 08:56:54.831043    5214 out.go:291] Setting OutFile to fd 1 ...
I0216 08:56:54.831215    5214 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:56:54.831221    5214 out.go:304] Setting ErrFile to fd 2...
I0216 08:56:54.831226    5214 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:56:54.831407    5214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
I0216 08:56:54.832071    5214 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 08:56:54.832172    5214 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 08:56:54.832636    5214 cli_runner.go:164] Run: docker container inspect functional-060000 --format={{.State.Status}}
I0216 08:56:54.885584    5214 ssh_runner.go:195] Run: systemctl --version
I0216 08:56:54.885678    5214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-060000
I0216 08:56:54.939177    5214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50071 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/functional-060000/id_rsa Username:docker}
I0216 08:56:55.033068    5214 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
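A note for anyone post-processing this report: the single-line JSON payload above is an array of image records that decodes cleanly with Go's encoding/json, and the \u003c / \u003e sequences are just the encoder escaping the angle brackets in the <none> tags. A minimal decoder sketch, assuming only the fields visible in the output (the listedImage type and the program around it are illustrative, not minikube's own):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // listedImage mirrors the fields visible in the `image ls --format json`
    // output above; the type name is illustrative.
    type listedImage struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"` // bytes, serialized as a string
    }

    func main() {
        var images []listedImage
        if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, img := range images {
            for _, tag := range img.RepoTags {
                fmt.Printf("%s\t%s bytes\n", tag, img.Size)
            }
        }
    }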

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-060000 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-060000
size: "32900000"
- id: 9ede528790263997da9d09917ca487a511c4386d4d6131a98e4476474aa9ec58
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-060000
size: "30"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-060000 image ls --format yaml --alsologtostderr:
I0216 08:56:49.352344    5183 out.go:291] Setting OutFile to fd 1 ...
I0216 08:56:49.352649    5183 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:56:49.352655    5183 out.go:304] Setting ErrFile to fd 2...
I0216 08:56:49.352660    5183 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:56:49.352848    5183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
I0216 08:56:49.353566    5183 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 08:56:49.353694    5183 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 08:56:49.354086    5183 cli_runner.go:164] Run: docker container inspect functional-060000 --format={{.State.Status}}
I0216 08:56:49.406685    5183 ssh_runner.go:195] Run: systemctl --version
I0216 08:56:49.406759    5183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-060000
I0216 08:56:49.461618    5183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50071 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/functional-060000/id_rsa Username:docker}
I0216 08:56:49.554603    5183 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 ssh pgrep buildkitd: exit status 1 (380.245564ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image build -t localhost/my-image:functional-060000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 image build -t localhost/my-image:functional-060000 testdata/build --alsologtostderr: (4.493480847s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-060000 image build -t localhost/my-image:functional-060000 testdata/build --alsologtostderr:
I0216 08:56:50.041094    5202 out.go:291] Setting OutFile to fd 1 ...
I0216 08:56:50.041393    5202 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:56:50.041400    5202 out.go:304] Setting ErrFile to fd 2...
I0216 08:56:50.041404    5202 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:56:50.041627    5202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
I0216 08:56:50.042329    5202 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 08:56:50.043915    5202 config.go:182] Loaded profile config "functional-060000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 08:56:50.044383    5202 cli_runner.go:164] Run: docker container inspect functional-060000 --format={{.State.Status}}
I0216 08:56:50.098440    5202 ssh_runner.go:195] Run: systemctl --version
I0216 08:56:50.098509    5202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-060000
I0216 08:56:50.150855    5202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50071 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/functional-060000/id_rsa Username:docker}
I0216 08:56:50.245805    5202 build_images.go:151] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3591641979.tar
I0216 08:56:50.245888    5202 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0216 08:56:50.261627    5202 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3591641979.tar
I0216 08:56:50.266192    5202 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3591641979.tar: stat -c "%s %y" /var/lib/minikube/build/build.3591641979.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3591641979.tar': No such file or directory
I0216 08:56:50.266224    5202 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3591641979.tar --> /var/lib/minikube/build/build.3591641979.tar (3072 bytes)
I0216 08:56:50.309395    5202 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3591641979
I0216 08:56:50.325321    5202 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3591641979 -xf /var/lib/minikube/build/build.3591641979.tar
I0216 08:56:50.342970    5202 docker.go:360] Building image: /var/lib/minikube/build/build.3591641979
I0216 08:56:50.343033    5202 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-060000 /var/lib/minikube/build/build.3591641979
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.3s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:29d27a074ca12952ac512abc854b0451c738a9656c3db04c6d28eb8fd4d31d1d done
#8 naming to localhost/my-image:functional-060000 done
#8 DONE 0.0s
I0216 08:56:54.416276    5202 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-060000 /var/lib/minikube/build/build.3591641979: (4.073286857s)
I0216 08:56:54.416354    5202 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3591641979
I0216 08:56:54.431824    5202 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3591641979.tar
I0216 08:56:54.448126    5202 build_images.go:207] Built localhost/my-image:functional-060000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3591641979.tar
I0216 08:56:54.448154    5202 build_images.go:123] succeeded building to: functional-060000
I0216 08:56:54.448158    5202 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.18s)
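Judging from build steps #1 through #7 above (a 97B Dockerfile, a 62B build context holding content.txt, and the three stages FROM, RUN true, ADD), the Dockerfile under testdata/build is presumably equivalent to this reconstruction (inferred from the log, not the verbatim source):

    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /

The image it produces is the localhost/my-image:functional-060000 entry (id 29d27a074ca1...) that also appears in the image listings earlier in this report.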

TestFunctional/parallel/ImageCommands/Setup (5.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.648400639s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-060000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.72s)

TestFunctional/parallel/DockerEnv/bash (1.94s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-060000 docker-env) && out/minikube-darwin-amd64 status -p functional-060000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-060000 docker-env) && out/minikube-darwin-amd64 status -p functional-060000": (1.237683567s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-060000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.94s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image load --daemon gcr.io/google-containers/addon-resizer:functional-060000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 image load --daemon gcr.io/google-containers/addon-resizer:functional-060000 --alsologtostderr: (4.622274599s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.99s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image load --daemon gcr.io/google-containers/addon-resizer:functional-060000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 image load --daemon gcr.io/google-containers/addon-resizer:functional-060000 --alsologtostderr: (2.680258981s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.404592004s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-060000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image load --daemon gcr.io/google-containers/addon-resizer:functional-060000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 image load --daemon gcr.io/google-containers/addon-resizer:functional-060000 --alsologtostderr: (3.450346605s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image save gcr.io/google-containers/addon-resizer:functional-060000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 image save gcr.io/google-containers/addon-resizer:functional-060000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.195154716s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.20s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image rm gcr.io/google-containers/addon-resizer:functional-060000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.779229032s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-060000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 image save --daemon gcr.io/google-containers/addon-resizer:functional-060000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 image save --daemon gcr.io/google-containers/addon-resizer:functional-060000 --alsologtostderr: (1.211994732s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-060000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-060000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-060000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-060000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-060000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4518: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-060000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (55.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-060000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d74993e6-23db-4de5-bfa6-30ab3b9ce4bb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d74993e6-23db-4de5-bfa6-30ab3b9ce4bb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 55.004663865s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (55.16s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-060000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.24s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-060000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4548: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-060000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-060000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-bmnnc" [969508d6-79f4-466b-8174-0031f150673d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-bmnnc" [969508d6-79f4-466b-8174-0031f150673d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003427704s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "420.758753ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "78.746941ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "433.590327ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "85.154351ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/ServiceCmd/List (1.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 service list
functional_test.go:1455: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 service list: (1.182588601s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.18s)

TestFunctional/parallel/MountCmd/any-port (11.74s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1801063383/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1708102574226855000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1801063383/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1708102574226855000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1801063383/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1708102574226855000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1801063383/001/test-1708102574226855000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (418.993542ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 16 16:56 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 16 16:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 16 16:56 test-1708102574226855000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh cat /mount-9p/test-1708102574226855000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-060000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [96e1894c-0738-4840-9344-7e67887eea0c] Pending
helpers_test.go:344: "busybox-mount" [96e1894c-0738-4840-9344-7e67887eea0c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [96e1894c-0738-4840-9344-7e67887eea0c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [96e1894c-0738-4840-9344-7e67887eea0c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.005129514s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-060000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1801063383/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.74s)
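The non-zero findmnt exit at the start of this test is the expected first probe: the 9p server behind "minikube mount" is not up instantly, so the helper probes again and the immediate retry succeeds. A minimal sketch of that poll-until-mounted pattern (an illustration against the functional-060000 profile, not the test helper's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForMount polls findmnt inside the guest until the 9p mount is
    // visible; a failing first attempt, as in the log above, is normal.
    func waitForMount(profile, path string, attempts int) error {
        for i := 0; i < attempts; i++ {
            probe := fmt.Sprintf("findmnt -T %s | grep 9p", path)
            cmd := exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh", probe)
            if err := cmd.Run(); err == nil {
                return nil // the mount showed up in the guest
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s never appeared as a 9p mount", path)
    }

    func main() {
        if err := waitForMount("functional-060000", "/mount-9p", 10); err != nil {
            fmt.Println(err)
        }
    }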

TestFunctional/parallel/ServiceCmd/JSONOutput (1.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-darwin-amd64 -p functional-060000 service list -o json: (1.159413084s)
functional_test.go:1490: Took "1.159502198s" to run "out/minikube-darwin-amd64 -p functional-060000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.16s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 service --namespace=default --https --url hello-node: signal: killed (15.002622804s)

-- stdout --
	https://127.0.0.1:50438

-- /stdout --
** stderr **
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:50438
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
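The "signal: killed" verdict above is expected rather than a failure: with the Docker driver on darwin, "minikube service" keeps a tunnel process in the foreground (hence the stderr warning that the terminal must stay open), so the harness reads the endpoint from stdout and kills the command at its 15-second deadline. A sketch of that run-with-deadline pattern (an illustration, not the harness's exact code):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Give the foreground tunnel 15s, then kill it; the endpoint is
        // printed to stdout almost immediately, well before the deadline.
        ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
        defer cancel()

        cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "-p", "functional-060000",
            "service", "--namespace=default", "--https", "--url", "hello-node")
        out, err := cmd.Output()
        // After the deadline fires err reports "signal: killed", but the
        // captured URL (https://127.0.0.1:<port>) is still present in out.
        fmt.Printf("endpoint: %s (exit: %v)\n", out, err)
    }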

TestFunctional/parallel/MountCmd/specific-port (2.23s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3122127239/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (445.842604ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3122127239/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 ssh "sudo umount -f /mount-9p": exit status 1 (381.750281ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-060000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3122127239/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.23s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4207187924/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4207187924/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4207187924/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 ssh "findmnt -T" /mount1: exit status 1 (499.578078ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-060000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4207187924/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4207187924/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-060000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4207187924/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 service hello-node --url --format={{.IP}}: signal: killed (15.00506981s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr **
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-060000 service hello-node --url
2024/02/16 08:56:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-060000 service hello-node --url: signal: killed (15.001977466s)

-- stdout --
	http://127.0.0.1:50545

-- /stdout --
** stderr **
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:50545
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-060000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-060000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-060000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (22.68s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-374000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-374000 --driver=docker : (22.682649056s)
--- PASS: TestImageBuild/serial/Setup (22.68s)

TestImageBuild/serial/NormalBuild (4.29s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-374000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-374000: (4.288976555s)
--- PASS: TestImageBuild/serial/NormalBuild (4.29s)

TestImageBuild/serial/BuildWithBuildArg (1.29s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-374000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-374000: (1.294046048s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.29s)

TestImageBuild/serial/BuildWithDockerIgnore (1.06s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-374000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-374000: (1.064710317s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.06s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.09s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-374000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-374000: (1.087200947s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.09s)

TestJSONOutput/start/Command (39.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-800000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-800000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (39.958277529s)
--- PASS: TestJSONOutput/start/Command (39.96s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-800000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-800000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.77s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-800000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-800000 --output=json --user=testUser: (10.773218358s)
--- PASS: TestJSONOutput/stop/Command (10.77s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.77s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-213000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-213000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (390.852157ms)

-- stdout --
	{"specversion":"1.0","id":"c5827000-ba61-4fb2-9c96-e1b5199585e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-213000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"12ef1549-6378-4aae-830b-516248d6e4fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17936"}}
	{"specversion":"1.0","id":"bbd4881d-019c-441d-a76d-7a7fe735090f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig"}}
	{"specversion":"1.0","id":"9e071eba-46fd-46ba-97e6-e30a29f02cb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"dd6825a4-13fe-4641-a0f4-1a9ef27a68bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"75734167-7415-4a71-8f44-560fb9d612cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube"}}
	{"specversion":"1.0","id":"b57faf33-ec08-48b9-ac03-51ac900a9302","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6c6fb529-d264-4cff-819d-e2fb92f95df1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-213000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-213000
--- PASS: TestErrorJSONOutput (0.77s)
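Each line in the -- stdout -- block above is a self-contained CloudEvents-style JSON object, which is what makes --output=json scriptable. Below is a minimal Go sketch of a consumer, assuming only the fields visible in this run; the event struct and the error filter are illustrative, not minikube's or the test suite's own code.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the JSON lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines
		}
		// Error events carry name, exitcode and message, as in the
		// DRV_UNSUPPORTED_OS event above.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}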

TestKicCustomNetwork/create_custom_network (25.08s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-996000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-996000 --network=: (22.735823447s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-996000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-996000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-996000: (2.290505696s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.08s)

TestKicCustomNetwork/use_default_bridge_network (25.29s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-808000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-808000 --network=bridge: (22.981079264s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-808000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-808000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-808000: (2.257118174s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.29s)

TestKicExistingNetwork (24.96s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-084000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-084000 --network=existing-network: (22.502934998s)
helpers_test.go:175: Cleaning up "existing-network-084000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-084000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-084000: (2.111040772s)
--- PASS: TestKicExistingNetwork (24.96s)

TestKicCustomSubnet (25.74s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-732000 --subnet=192.168.60.0/24
E0216 09:08:01.394627    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-732000 --subnet=192.168.60.0/24: (23.231161086s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-732000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-732000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-732000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-732000: (2.451029591s)
--- PASS: TestKicCustomSubnet (25.74s)
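The subnet assertion above reduces to comparing the --subnet argument with what `docker network inspect` reports for the profile's network. A rough Go equivalent, reusing the exact inspect format string and names from this run (a sketch, not the test's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.60.0/24" // the --subnet value passed to minikube start above
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-732000",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
	} else {
		fmt.Println("subnet matches:", got)
	}
}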

TestKicStaticIP (24.98s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-564000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-564000 --static-ip=192.168.200.200: (22.309181905s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-564000 ip
helpers_test.go:175: Cleaning up "static-ip-564000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-564000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-564000: (2.431206724s)
--- PASS: TestKicStaticIP (24.98s)

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (52.5s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-536000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-536000 --driver=docker : (22.641000199s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-538000 --driver=docker 
E0216 09:08:59.606896    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-538000 --driver=docker : (22.970143261s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-536000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-538000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-538000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-538000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-538000: (2.497556795s)
helpers_test.go:175: Cleaning up "first-536000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-536000
E0216 09:09:24.437764    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-536000: (2.586103902s)
--- PASS: TestMinikubeProfile (52.50s)

TestMountStart/serial/StartWithMountFirst (8.09s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-234000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-234000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.085095385s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.09s)

TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-234000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (8.08s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-248000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-248000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.080029043s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.08s)

TestMountStart/serial/VerifyMountSecond (0.39s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-248000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (2.07s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-234000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-234000 --alsologtostderr -v=5: (2.07222623s)
--- PASS: TestMountStart/serial/DeleteFirst (2.07s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-248000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.55s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-248000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-248000: (1.553199457s)
--- PASS: TestMountStart/serial/Stop (1.55s)

TestMountStart/serial/RestartStopped (8.95s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-248000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-248000: (7.948694141s)
--- PASS: TestMountStart/serial/RestartStopped (8.95s)

TestMountStart/serial/VerifyMountPostStop (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-248000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (65.51s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-183000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-183000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m4.733313474s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.51s)

TestMultiNode/serial/DeployApp2Nodes (42.42s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-183000 -- rollout status deployment/busybox: (7.022169274s)
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- exec busybox-5b5d89c9d6-86gzj -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- exec busybox-5b5d89c9d6-thv6n -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- exec busybox-5b5d89c9d6-86gzj -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- exec busybox-5b5d89c9d6-thv6n -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- exec busybox-5b5d89c9d6-86gzj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- exec busybox-5b5d89c9d6-thv6n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (42.42s)
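The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines show the test polling until both busybox replicas have been assigned a pod IP. A simplified sketch of such a poll loop follows; the retry budget and sleep interval are assumptions, and plain kubectl against the current context stands in for the `minikube kubectl` wrapper used above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 10; i++ { // retry budget is an assumption
		out, err := exec.Command("kubectl", "get", "pods", "-o",
			"jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			ips := strings.Fields(string(out))
			if len(ips) == 2 { // one IP per busybox replica
				fmt.Println("both pod IPs assigned:", ips)
				return
			}
			fmt.Printf("expected 2 Pod IPs but got %d (may be temporary)\n", len(ips))
		}
		time.Sleep(5 * time.Second) // interval is an assumption
	}
}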

TestMultiNode/serial/PingHostFrom2Pods (0.94s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- exec busybox-5b5d89c9d6-86gzj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- exec busybox-5b5d89c9d6-86gzj -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- exec busybox-5b5d89c9d6-thv6n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-183000 -- exec busybox-5b5d89c9d6-thv6n -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

TestMultiNode/serial/AddNode (16.29s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-183000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-183000 -v 3 --alsologtostderr: (15.14964702s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-darwin-amd64 -p multinode-183000 status --alsologtostderr: (1.136367668s)
--- PASS: TestMultiNode/serial/AddNode (16.29s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-183000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.48s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.48s)

TestMultiNode/serial/CopyFile (14.57s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp testdata/cp-test.txt multinode-183000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp multinode-183000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile4274954469/001/cp-test_multinode-183000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp multinode-183000:/home/docker/cp-test.txt multinode-183000-m02:/home/docker/cp-test_multinode-183000_multinode-183000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m02 "sudo cat /home/docker/cp-test_multinode-183000_multinode-183000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp multinode-183000:/home/docker/cp-test.txt multinode-183000-m03:/home/docker/cp-test_multinode-183000_multinode-183000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m03 "sudo cat /home/docker/cp-test_multinode-183000_multinode-183000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp testdata/cp-test.txt multinode-183000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp multinode-183000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile4274954469/001/cp-test_multinode-183000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp multinode-183000-m02:/home/docker/cp-test.txt multinode-183000:/home/docker/cp-test_multinode-183000-m02_multinode-183000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000 "sudo cat /home/docker/cp-test_multinode-183000-m02_multinode-183000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp multinode-183000-m02:/home/docker/cp-test.txt multinode-183000-m03:/home/docker/cp-test_multinode-183000-m02_multinode-183000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m03 "sudo cat /home/docker/cp-test_multinode-183000-m02_multinode-183000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp testdata/cp-test.txt multinode-183000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp multinode-183000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile4274954469/001/cp-test_multinode-183000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp multinode-183000-m03:/home/docker/cp-test.txt multinode-183000:/home/docker/cp-test_multinode-183000-m03_multinode-183000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000 "sudo cat /home/docker/cp-test_multinode-183000-m03_multinode-183000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 cp multinode-183000-m03:/home/docker/cp-test.txt multinode-183000-m02:/home/docker/cp-test_multinode-183000-m03_multinode-183000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 ssh -n multinode-183000-m02 "sudo cat /home/docker/cp-test_multinode-183000-m03_multinode-183000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.57s)

TestMultiNode/serial/StopNode (3.01s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-darwin-amd64 -p multinode-183000 node stop m03: (1.508935506s)
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-183000 status: exit status 7 (746.93451ms)

-- stdout --
	multinode-183000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-183000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-183000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-183000 status --alsologtostderr: exit status 7 (756.825267ms)

-- stdout --
	multinode-183000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-183000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-183000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0216 09:12:19.990896    8631 out.go:291] Setting OutFile to fd 1 ...
	I0216 09:12:19.991074    8631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:12:19.991080    8631 out.go:304] Setting ErrFile to fd 2...
	I0216 09:12:19.991085    8631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:12:19.991268    8631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 09:12:19.991449    8631 out.go:298] Setting JSON to false
	I0216 09:12:19.991471    8631 mustload.go:65] Loading cluster: multinode-183000
	I0216 09:12:19.991505    8631 notify.go:220] Checking for updates...
	I0216 09:12:19.991788    8631 config.go:182] Loaded profile config "multinode-183000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 09:12:19.991799    8631 status.go:255] checking status of multinode-183000 ...
	I0216 09:12:19.992213    8631 cli_runner.go:164] Run: docker container inspect multinode-183000 --format={{.State.Status}}
	I0216 09:12:20.045350    8631 status.go:330] multinode-183000 host status = "Running" (err=<nil>)
	I0216 09:12:20.045408    8631 host.go:66] Checking if "multinode-183000" exists ...
	I0216 09:12:20.045670    8631 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-183000
	I0216 09:12:20.098379    8631 host.go:66] Checking if "multinode-183000" exists ...
	I0216 09:12:20.098648    8631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 09:12:20.098722    8631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-183000
	I0216 09:12:20.151976    8631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50935 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/multinode-183000/id_rsa Username:docker}
	I0216 09:12:20.246876    8631 ssh_runner.go:195] Run: systemctl --version
	I0216 09:12:20.251561    8631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 09:12:20.271215    8631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-183000
	I0216 09:12:20.324274    8631 kubeconfig.go:92] found "multinode-183000" server: "https://127.0.0.1:50934"
	I0216 09:12:20.324305    8631 api_server.go:166] Checking apiserver status ...
	I0216 09:12:20.324347    8631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 09:12:20.341478    8631 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2290/cgroup
	W0216 09:12:20.358217    8631 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2290/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0216 09:12:20.358316    8631 ssh_runner.go:195] Run: ls
	I0216 09:12:20.363716    8631 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50934/healthz ...
	I0216 09:12:20.369621    8631 api_server.go:279] https://127.0.0.1:50934/healthz returned 200:
	ok
	I0216 09:12:20.369640    8631 status.go:421] multinode-183000 apiserver status = Running (err=<nil>)
	I0216 09:12:20.369655    8631 status.go:257] multinode-183000 status: &{Name:multinode-183000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0216 09:12:20.369673    8631 status.go:255] checking status of multinode-183000-m02 ...
	I0216 09:12:20.369986    8631 cli_runner.go:164] Run: docker container inspect multinode-183000-m02 --format={{.State.Status}}
	I0216 09:12:20.422463    8631 status.go:330] multinode-183000-m02 host status = "Running" (err=<nil>)
	I0216 09:12:20.422484    8631 host.go:66] Checking if "multinode-183000-m02" exists ...
	I0216 09:12:20.422712    8631 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-183000-m02
	I0216 09:12:20.474265    8631 host.go:66] Checking if "multinode-183000-m02" exists ...
	I0216 09:12:20.474511    8631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 09:12:20.474565    8631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-183000-m02
	I0216 09:12:20.526312    8631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50974 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/multinode-183000-m02/id_rsa Username:docker}
	I0216 09:12:20.618887    8631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 09:12:20.635675    8631 status.go:257] multinode-183000-m02 status: &{Name:multinode-183000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0216 09:12:20.635705    8631 status.go:255] checking status of multinode-183000-m03 ...
	I0216 09:12:20.635956    8631 cli_runner.go:164] Run: docker container inspect multinode-183000-m03 --format={{.State.Status}}
	I0216 09:12:20.688892    8631 status.go:330] multinode-183000-m03 host status = "Stopped" (err=<nil>)
	I0216 09:12:20.688919    8631 status.go:343] host is not running, skipping remaining checks
	I0216 09:12:20.688929    8631 status.go:257] multinode-183000-m03 status: &{Name:multinode-183000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.01s)
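Note that `minikube status` exits non-zero (status 7 in this run) as soon as any node's host is stopped, while still printing a usable report on stdout. A small Go sketch of handling that convention, assuming the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-183000", "status")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 7 was observed above while m03 was stopped; stdout
		// still contains the per-node report.
		fmt.Printf("status exited %d; report:\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err) // the command could not be started at all
	}
	fmt.Printf("all nodes running:\n%s", out)
}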

TestMultiNode/serial/StartAfterStop (13.45s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-183000 node start m03 --alsologtostderr: (12.332711931s)
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 status
multinode_test.go:289: (dbg) Done: out/minikube-darwin-amd64 -p multinode-183000 status: (1.000993671s)
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.45s)

TestMultiNode/serial/RestartKeepsNodes (100.54s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-183000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-183000
multinode_test.go:318: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-183000: (22.922345558s)
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-183000 --wait=true -v=8 --alsologtostderr
E0216 09:13:01.392328    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 09:13:59.604742    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-183000 --wait=true -v=8 --alsologtostderr: (1m17.481897024s)
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-183000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (100.54s)

TestMultiNode/serial/DeleteNode (5.99s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p multinode-183000 node delete m03: (5.101454223s)
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.99s)
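The final check above uses a kubectl go-template that prints one Ready condition status per node. A compact sketch of issuing the same query and counting the results; the expected count of 2 reflects this run after m03 was deleted:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same go-template as the test: prints one Ready status per node.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	ready := strings.Count(string(out), "True")
	fmt.Printf("%d Ready nodes (this run expects 2 after deleting m03)\n", ready)
}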

TestMultiNode/serial/StopMultiNode (21.9s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 stop
multinode_test.go:342: (dbg) Done: out/minikube-darwin-amd64 -p multinode-183000 stop: (21.591067823s)
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-183000 status: exit status 7 (155.918794ms)

-- stdout --
	multinode-183000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-183000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-183000 status --alsologtostderr: exit status 7 (157.251254ms)

-- stdout --
	multinode-183000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-183000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0216 09:14:42.472498    9115 out.go:291] Setting OutFile to fd 1 ...
	I0216 09:14:42.472677    9115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:14:42.472682    9115 out.go:304] Setting ErrFile to fd 2...
	I0216 09:14:42.472686    9115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 09:14:42.472878    9115 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
	I0216 09:14:42.473060    9115 out.go:298] Setting JSON to false
	I0216 09:14:42.473086    9115 mustload.go:65] Loading cluster: multinode-183000
	I0216 09:14:42.473117    9115 notify.go:220] Checking for updates...
	I0216 09:14:42.473403    9115 config.go:182] Loaded profile config "multinode-183000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 09:14:42.473413    9115 status.go:255] checking status of multinode-183000 ...
	I0216 09:14:42.473806    9115 cli_runner.go:164] Run: docker container inspect multinode-183000 --format={{.State.Status}}
	I0216 09:14:42.524077    9115 status.go:330] multinode-183000 host status = "Stopped" (err=<nil>)
	I0216 09:14:42.524101    9115 status.go:343] host is not running, skipping remaining checks
	I0216 09:14:42.524108    9115 status.go:257] multinode-183000 status: &{Name:multinode-183000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0216 09:14:42.524136    9115 status.go:255] checking status of multinode-183000-m02 ...
	I0216 09:14:42.524388    9115 cli_runner.go:164] Run: docker container inspect multinode-183000-m02 --format={{.State.Status}}
	I0216 09:14:42.574874    9115 status.go:330] multinode-183000-m02 host status = "Stopped" (err=<nil>)
	I0216 09:14:42.574919    9115 status.go:343] host is not running, skipping remaining checks
	I0216 09:14:42.574929    9115 status.go:257] multinode-183000-m02 status: &{Name:multinode-183000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.90s)

TestMultiNode/serial/RestartMultiNode (63.93s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-183000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0216 09:15:22.646189    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-183000 --wait=true -v=8 --alsologtostderr --driver=docker : (1m3.030914284s)
multinode_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-183000 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (63.93s)

TestMultiNode/serial/ValidateNameConflict (25.93s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-183000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-183000-m02 --driver=docker 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-183000-m02 --driver=docker : exit status 14 (543.684562ms)

-- stdout --
	* [multinode-183000-m02] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=17936
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-183000-m02' is duplicated with machine name 'multinode-183000-m02' in profile 'multinode-183000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-183000-m03 --driver=docker 
multinode_test.go:488: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-183000-m03 --driver=docker : (22.34215059s)
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-183000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-183000: exit status 80 (489.422684ms)

-- stdout --
	* Adding node m03 to cluster multinode-183000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-183000-m03 already exists in multinode-183000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-183000-m03
multinode_test.go:500: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-183000-m03: (2.491259689s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.93s)

TestPreload (176.81s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-987000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-987000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m35.672895834s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-987000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-987000 image pull gcr.io/k8s-minikube/busybox: (5.326823875s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-987000
E0216 09:18:01.388165    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-987000: (10.810527341s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-987000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
E0216 09:18:59.601034    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-987000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (1m2.112674402s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-987000 image list
helpers_test.go:175: Cleaning up "test-preload-987000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-987000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-987000: (2.587531692s)
--- PASS: TestPreload (176.81s)

TestScheduledStopUnix (95.71s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-455000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-455000 --memory=2048 --driver=docker : (21.569164611s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-455000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-455000 -n scheduled-stop-455000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-455000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-455000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-455000 -n scheduled-stop-455000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-455000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-455000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-455000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-455000: exit status 7 (108.146082ms)

-- stdout --
	scheduled-stop-455000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-455000 -n scheduled-stop-455000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-455000 -n scheduled-stop-455000: exit status 7 (107.269809ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-455000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-455000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-455000: (2.161758744s)
--- PASS: TestScheduledStopUnix (95.71s)
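
For reference, the scheduled-stop flow above can be reproduced outside the harness; a rough Go sketch, with the binary path, flags, and profile name copied from the log (error handling trimmed for brevity):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk := "out/minikube-darwin-amd64"
	p := "scheduled-stop-455000"
	exec.Command(mk, "stop", "-p", p, "--schedule", "5m").Run()   // arm a stop 5 minutes out
	exec.Command(mk, "stop", "-p", p, "--cancel-scheduled").Run() // disarm it again
	exec.Command(mk, "stop", "-p", p, "--schedule", "15s").Run()  // re-arm with a short fuse
	// Once the schedule fires, status exits 7 and reports Stopped
	// (the "may be ok" case logged above).
	out, err := exec.Command(mk, "status", "--format={{.Host}}", "-p", p).CombinedOutput()
	fmt.Printf("host=%s err=%v\n", out, err)
}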

TestInsufficientStorage (10.74s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-242000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-242000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (7.687189575s)

-- stdout --
	{"specversion":"1.0","id":"74653751-75a1-4177-b180-ab52a7c62680","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-242000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"60a66d85-ddec-4d5d-b1bc-69fd0c6ceda0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17936"}}
	{"specversion":"1.0","id":"8a367462-d9ec-4e17-b11a-0c55f30daeb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig"}}
	{"specversion":"1.0","id":"ee79ca93-84a5-433c-8e4a-e57fb4ae3d02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"0fe2ec7a-d5d8-4b85-a29b-db7e2b7fc092","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f7faf647-2937-4451-8805-fba1c193c537","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube"}}
	{"specversion":"1.0","id":"5ad5aa83-3f7e-472b-a718-6c0f05349e82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7ef9bcb9-8ba1-4a27-8a81-4a122e8f845d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5b6cf26f-fb16-47ea-8edf-5625c5a38882","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"45b2d9ed-0198-4169-9d34-b5d7692e04c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2563725-e493-48d8-9235-b94a4df760c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"d19fdf09-cbd0-45ea-bfd7-e45f3192c91c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-242000 in cluster insufficient-storage-242000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"32dd6d60-1c72-4046-a1c5-a7588084df27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708008208-17936 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc1ae563-0cf2-4160-b7c1-39c1add67c4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b3c1acb-331b-4ddc-a34c-329b766a18da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-242000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-242000 --output=json --layout=cluster: exit status 7 (394.058441ms)

-- stdout --
	{"Name":"insufficient-storage-242000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-242000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0216 09:26:21.119615   10595 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-242000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-242000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-242000 --output=json --layout=cluster: exit status 7 (398.455549ms)

-- stdout --
	{"Name":"insufficient-storage-242000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-242000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0216 09:26:21.518535   10605 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-242000" does not appear in /Users/jenkins/minikube-integration/17936-1021/kubeconfig
	E0216 09:26:21.535264   10605 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/insufficient-storage-242000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-242000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-242000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-242000: (2.256326444s)
--- PASS: TestInsufficientStorage (10.74s)
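
The stdout above is minikube's --output=json mode: one CloudEvents-style JSON object per line, with the out-of-space failure arriving as an io.k8s.sigs.minikube.error event carrying exitcode 26. A small Go sketch that picks such error events out of a stream; only the envelope fields visible in the lines above are modelled:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just the fields shown in this report.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip any non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s: %s\n", e.Data["exitcode"], e.Data["message"])
		}
	}
}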

TestRunningBinaryUpgrade (192.05s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3688316712 start -p running-upgrade-763000 --memory=2200 --vm-driver=docker 
E0216 09:28:59.610081    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3688316712 start -p running-upgrade-763000 --memory=2200 --vm-driver=docker : (2m22.19496175s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-763000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-763000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (42.255442043s)
helpers_test.go:175: Cleaning up "running-upgrade-763000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-763000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-763000: (2.835888361s)
--- PASS: TestRunningBinaryUpgrade (192.05s)
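
The upgrade pattern above: an older release binary (downloaded to the temp path shown) creates the cluster, then the binary under test restarts the same profile in place. A rough Go sketch of the same sequence; the old-binary path below is illustrative, not the temp file the test actually uses:

package main

import (
	"log"
	"os/exec"
)

func main() {
	oldBin := "/tmp/minikube-v1.26.0" // assumption: an already-downloaded v1.26.0 release binary
	newBin := "out/minikube-darwin-amd64"
	p := "running-upgrade-763000"
	for _, c := range [][]string{
		{oldBin, "start", "-p", p, "--memory=2200", "--vm-driver=docker"},
		{newBin, "start", "-p", p, "--memory=2200", "--driver=docker"}, // upgrade the running cluster in place
		{newBin, "delete", "-p", p},                                    // clean up, as helpers_test.go does
	} {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", c, err, out)
		}
	}
}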

TestMissingContainerUpgrade (109.01s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3830412933 start -p missing-upgrade-161000 --memory=2200 --driver=docker 
version_upgrade_test.go:309: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3830412933 start -p missing-upgrade-161000 --memory=2200 --driver=docker : (33.300091249s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-161000
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-161000: (10.259797021s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-161000
version_upgrade_test.go:329: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-161000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0216 09:33:01.393814    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-161000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (58.299468619s)
helpers_test.go:175: Cleaning up "missing-upgrade-161000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-161000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-161000: (2.472774196s)
--- PASS: TestMissingContainerUpgrade (109.01s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (21.02s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17936
- KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1861212888/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1861212888/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1861212888/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1861212888/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (21.02s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (22.9s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17936
- KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current661390826/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current661390826/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current661390826/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current661390826/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (22.90s)

TestStoppedBinaryUpgrade/Setup (4.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.58s)

TestStoppedBinaryUpgrade/Upgrade (75.72s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.4272357053 start -p stopped-upgrade-093000 --memory=2200 --vm-driver=docker 
E0216 09:33:59.607320    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.4272357053 start -p stopped-upgrade-093000 --memory=2200 --vm-driver=docker : (30.862108312s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.4272357053 -p stopped-upgrade-093000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.4272357053 -p stopped-upgrade-093000 stop: (12.35169804s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-093000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-093000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (32.508282034s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (75.72s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-093000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-093000: (3.202733782s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.20s)

TestPause/serial/Start (74.81s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-119000 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-119000 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m14.805225332s)
--- PASS: TestPause/serial/Start (74.81s)

TestPause/serial/SecondStartNoReconfiguration (40.77s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-119000 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-119000 --alsologtostderr -v=1 --driver=docker : (40.749337691s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.77s)

TestPause/serial/Pause (0.65s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-119000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-119000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-119000 --output=json --layout=cluster: exit status 2 (410.323496ms)

-- stdout --
	{"Name":"pause-119000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-119000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
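
The --layout=cluster payload above is plain JSON, so the paused state (StatusCode 418) can be asserted on programmatically. A minimal Go sketch that models only the fields visible in this report; unknown fields are simply ignored by encoding/json:

package main

import (
	"encoding/json"
	"fmt"
)

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed copy of the stdout above.
	raw := `{"Name":"pause-119000","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-119000","StatusName":"OK"}]}`
	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s, %d node(s)\n", cs.Name, cs.StatusCode, cs.StatusName, len(cs.Nodes))
}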

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-119000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.79s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-119000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

TestPause/serial/DeletePaused (2.47s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-119000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-119000 --alsologtostderr -v=5: (2.470571307s)
--- PASS: TestPause/serial/DeletePaused (2.47s)

TestPause/serial/VerifyDeletedResources (16.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (15.910037873s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-119000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-119000: exit status 1 (49.609623ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-119000: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.06s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.4s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-080000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-080000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (398.942795ms)

-- stdout --
	* [NoKubernetes-080000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=17936
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)

TestNoKubernetes/serial/StartWithK8s (23.29s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-080000 --driver=docker 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-080000 --driver=docker : (22.851629253s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-080000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (23.29s)

TestNoKubernetes/serial/StartWithStopK8s (8.65s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-080000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-080000 --no-kubernetes --driver=docker : (6.009192489s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-080000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-080000 status -o json: exit status 2 (400.134325ms)

-- stdout --
	{"Name":"NoKubernetes-080000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-080000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-080000: (2.243358336s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.65s)

TestNoKubernetes/serial/Start (7.28s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-080000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-080000 --no-kubernetes --driver=docker : (7.280868407s)
--- PASS: TestNoKubernetes/serial/Start (7.28s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-080000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-080000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (375.559236ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
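
The check above leans on systemctl is-active returning non-zero when the unit is down, surfaced through ssh as exit status 3 and through minikube as exit status 1. A small Go sketch of the same probe, with binary path, profile, and remote command copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "ssh", "-p", "NoKubernetes-080000",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("kubelet not running; exit code", ee.ExitCode()) // the expected outcome here
			return
		}
		panic(err) // ssh itself failed
	}
	fmt.Println("kubelet is active") // would mean --no-kubernetes did not take effect
}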

TestNoKubernetes/serial/ProfileList (1.34s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.34s)

TestNoKubernetes/serial/Stop (1.56s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-080000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-080000: (1.557238662s)
--- PASS: TestNoKubernetes/serial/Stop (1.56s)

TestNoKubernetes/serial/StartNoArgs (7.99s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-080000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-080000 --driver=docker : (7.988418649s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.99s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-080000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-080000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (381.15161ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestNetworkPlugins/group/auto/Start (48.3s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (48.296034515s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.30s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-862000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

TestNetworkPlugins/group/auto/NetCatPod (13.17s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-862000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dxfgf" [996287c8-7317-470d-9771-b679d84c242d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dxfgf" [996287c8-7317-470d-9771-b679d84c242d] Running
E0216 09:38:59.605546    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.005381154s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.17s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-862000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
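
DNS, Localhost, and HairPin above are all variations on one probe: exec into the netcat deployment and check a name lookup or a TCP connect. A rough Go sketch of the trio, with the kubectl context and commands taken from the log (the probe helper is hypothetical); the same pattern repeats for the calico, flannel, kindnet, custom-flannel, and false groups below:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment of the
// given kubeconfig context and reports whether it exited cleanly.
func probe(ctx, shellCmd string) error {
	return exec.Command("kubectl", "--context", ctx, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", shellCmd).Run()
}

func main() {
	ctx := "auto-862000"
	for name, cmd := range map[string]string{
		"dns":       "nslookup kubernetes.default",    // cluster DNS resolves
		"localhost": "nc -w 5 -i 5 -z localhost 8080", // pod reaches itself on localhost
		"hairpin":   "nc -w 5 -i 5 -z netcat 8080",    // pod reaches itself via its service
	} {
		fmt.Printf("%s: err=%v\n", name, probe(ctx, cmd))
	}
}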

TestNetworkPlugins/group/calico/Start (66.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m6.833335053s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.83s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v8np7" [68aff862-0402-41ab-abc2-25c2e1ae6543] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00489046s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-862000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.50s)

TestNetworkPlugins/group/calico/NetCatPod (13.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-862000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qsfh2" [140360fa-ca12-47f2-bec2-5dc9009ca42c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qsfh2" [140360fa-ca12-47f2-bec2-5dc9009ca42c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005236976s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.28s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-862000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (53.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (53.508286484s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.51s)

TestNetworkPlugins/group/false/Start (39.64s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (39.644213848s)
--- PASS: TestNetworkPlugins/group/false/Start (39.64s)

TestNetworkPlugins/group/false/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-862000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.45s)

TestNetworkPlugins/group/false/NetCatPod (13.25s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-862000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x6msp" [9b8c2dee-81ad-452b-b1b8-2d58b16214d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x6msp" [9b8c2dee-81ad-452b-b1b8-2d58b16214d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.005137561s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.25s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-862000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-862000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fnxtm" [493df7bf-99c7-4070-b61f-5da0e286d82a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fnxtm" [493df7bf-99c7-4070-b61f-5da0e286d82a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.005652906s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.20s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-862000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-862000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (51.34s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
E0216 09:42:44.433367    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (51.343031544s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.34s)

TestNetworkPlugins/group/flannel/Start (51.62s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
E0216 09:43:01.388125    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (51.624204746s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.62s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rthj7" [56b03d4c-d3da-434a-89a5-a2da44a656fe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005782645s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-862000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-862000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zbz5c" [8ff89e57-070f-4d99-aa06-0f765c24256c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zbz5c" [8ff89e57-070f-4d99-aa06-0f765c24256c] Running
E0216 09:43:51.271325    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:43:51.277773    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:43:51.288924    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:43:51.309070    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:43:51.349259    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:43:51.429619    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:43:51.591363    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:43:51.911945    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:43:52.552700    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.006041851s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.24s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f5bfc" [6b867631-9762-4287-a7b2-b4a8dd720a3a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004998365s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-862000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-862000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)
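
Note: the three checks above all exec into the netcat deployment. DNS resolves kubernetes.default through the cluster DNS; Localhost connects to port 8080 on the pod's own loopback; HairPin connects to the pod's own Service name ("netcat"), which only succeeds when hairpin traffic is allowed back to the originating pod. In nc, -z probes without sending data, -w 5 bounds the wait in seconds, and a zero exit status means the port was reachable (a sketch):

    kubectl --context kindnet-862000 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"; echo "exit=$?"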

TestNetworkPlugins/group/flannel/NetCatPod (13.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-862000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hcwfr" [637b6d3d-4c03-4793-a58e-8524efa16dd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0216 09:43:53.832916    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:43:56.393588    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:43:59.730825    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 09:44:01.514219    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-hcwfr" [637b6d3d-4c03-4793-a58e-8524efa16dd5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.010096607s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.19s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-862000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (38.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (38.793445065s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.79s)
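
Note: --enable-default-cni=true asks minikube to install its minimal built-in bridge CNI configuration instead of a CNI DaemonSet; on current minikube it is the legacy spelling of --cni=bridge. Under that assumption, an equivalent start would be:

    out/minikube-darwin-amd64 start -p enable-default-cni-862000 --memory=3072 \
      --wait=true --cni=bridge --driver=docker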

TestNetworkPlugins/group/bridge/Start (38.17s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (38.169145547s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-862000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-862000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mhwbh" [3ed10c77-3b96-4149-829d-65fcace7a438] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mhwbh" [3ed10c77-3b96-4149-829d-65fcace7a438] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.004362012s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-862000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-862000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (15.20s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-862000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5hzvh" [611afde6-8a4b-481d-8a2d-d50acdc8077a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0216 09:45:13.197971    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-5hzvh" [611afde6-8a4b-481d-8a2d-d50acdc8077a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.00552024s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.20s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-862000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/Start (40.31s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0216 09:45:34.786396    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:45:34.791508    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:45:34.802896    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:45:34.823181    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:45:34.863697    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:45:34.943951    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:45:35.104973    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:45:35.425227    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:45:36.065676    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:45:37.346306    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:45:39.907115    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:45:45.028016    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-862000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (40.308487372s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (40.31s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-862000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.44s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.22s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-862000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w7sdj" [f43d382f-2582-4454-a1a0-ea92342206d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0216 09:46:15.750120    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-w7sdj" [f43d382f-2582-4454-a1a0-ea92342206d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.003964035s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.22s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-862000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

TestNetworkPlugins/group/kubenet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-862000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (153.52s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-575000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0216 09:46:56.711210    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:47:05.100790    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:05.106635    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:05.116972    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:05.137699    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:05.177859    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:05.257962    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:05.418065    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:05.738181    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:06.378455    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:07.658626    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:10.218896    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:14.948209    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:14.954362    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:14.964462    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:14.984626    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:15.026048    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:15.106268    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:15.266432    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:15.339265    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:15.587168    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:16.227870    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:17.508090    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:20.069018    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:25.190182    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:25.581354    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:35.430674    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:47:46.061910    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:47:55.911533    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:48:01.520250    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 09:48:18.633293    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:48:27.022965    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:48:33.264467    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:33.270847    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:33.281980    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:33.302819    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:33.405712    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:33.485887    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:33.647154    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:33.968904    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:34.609915    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:35.890742    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:36.872780    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:48:38.451974    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:42.779836    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 09:48:43.572213    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:46.959020    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:46.964190    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:46.975614    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:46.996400    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:47.036572    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:47.116713    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:47.276837    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:47.597635    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:48.237972    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:49.518822    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:51.277021    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:48:52.079256    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:53.813137    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:48:57.199590    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:48:59.735407    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 09:49:07.440139    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:49:14.293679    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:49:18.963863    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-575000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (2m33.519536772s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (153.52s)
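
Note: --preload=false disables the preloaded images-and-state tarball, so every control-plane image for v1.29.0-rc.2 has to be pulled individually; that is the main reason this first start took 2m33s while the preloaded starts above finished in about 40s. A timing comparison sketch (profile names are hypothetical):

    time out/minikube-darwin-amd64 start -p with-preload --driver=docker
    time out/minikube-darwin-amd64 start -p without-preload --preload=false --driver=docker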

TestStartStop/group/no-preload/serial/DeployApp (14.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-575000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd285467-5145-4bdf-b310-5052fb8ca1fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0216 09:49:27.920873    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [fd285467-5145-4bdf-b310-5052fb8ca1fb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 14.004122376s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-575000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (14.27s)
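
Note: once the busybox pod is Ready, the test execs "ulimit -n" in it to read the container's open-file-descriptor limit; this doubles as a smoke test that kubectl exec works against the freshly started cluster:

    kubectl --context no-preload-575000 exec busybox -- /bin/sh -c "ulimit -n"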

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-575000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-575000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075316312s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-575000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (10.86s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-575000 --alsologtostderr -v=3
E0216 09:49:48.945110    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-575000 --alsologtostderr -v=3: (10.85819592s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.86s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-575000 -n no-preload-575000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-575000 -n no-preload-575000: exit status 7 (107.259928ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-575000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)
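
Note: minikube status encodes component health bit by bit in its exit code (1 for the host, 2 for the cluster, 4 for Kubernetes), so exit status 7 means all three are down, which is exactly what the test expects right after minikube stop; hence the "may be ok". Observable directly (a sketch):

    out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-575000; echo "exit=$?"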

TestStartStop/group/no-preload/serial/SecondStart (337.15s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-575000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0216 09:49:55.254660    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:49:56.428495    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:49:56.434031    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:49:56.444266    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:49:56.464346    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:49:56.505176    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:49:56.585453    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:49:56.746090    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:49:57.066535    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:49:57.707098    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:49:58.794669    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:49:58.988355    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:50:01.548620    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:50:06.670797    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 09:50:08.881864    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-575000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (5m36.71904136s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-575000 -n no-preload-575000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (337.15s)

TestStartStop/group/old-k8s-version/serial/Stop (1.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-356000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-356000 --alsologtostderr -v=3: (1.558059992s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-356000 -n old-k8s-version-356000: exit status 7 (110.227342ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-356000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (18.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6mw8l" [69d07b6c-e6ba-4e3d-aa04-b841e4a2e1a5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0216 09:55:34.798072    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/calico-862000/client.crt: no such file or directory
E0216 09:55:39.630246    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6mw8l" [69d07b6c-e6ba-4e3d-aa04-b841e4a2e1a5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.00529462s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (18.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6mw8l" [69d07b6c-e6ba-4e3d-aa04-b841e4a2e1a5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004032462s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-575000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-575000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)
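
Note: the image-list step inventories the images present in the node's container runtime and reports anything outside the expected Kubernetes set; the busybox image flagged here is presumably left over from the DeployApp step. To spot it by hand (a sketch):

    out/minikube-darwin-amd64 -p no-preload-575000 image list --format=json | grep busybox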

TestStartStop/group/no-preload/serial/Pause (3.24s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-575000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-575000 -n no-preload-575000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-575000 -n no-preload-575000: exit status 2 (440.913516ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-575000 -n no-preload-575000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-575000 -n no-preload-575000: exit status 2 (429.911051ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-575000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-575000 -n no-preload-575000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-575000 -n no-preload-575000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.24s)
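
Note: after minikube pause the control-plane processes are frozen, so status reports the API server as Paused and the kubelet as Stopped, each via a non-zero exit; the test accepts exit status 2 as the expected paused state, then unpauses and re-checks both. The sequence, by hand:

    out/minikube-darwin-amd64 pause -p no-preload-575000
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-575000
    out/minikube-darwin-amd64 unpause -p no-preload-575000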

TestStartStop/group/embed-certs/serial/FirstStart (37.50s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-944000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0216 09:56:15.128512    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-944000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (37.495231848s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (37.50s)
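
Note: --embed-certs inlines the client certificate and key into kubeconfig as base64 data instead of referencing files under .minikube/profiles, which should avoid exactly the stale-path cert_rotation errors seen elsewhere in this run. To confirm the data is embedded (a sketch; the jsonpath filter is illustrative):

    kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-944000")].user.client-certificate-data}' | wc -c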

TestStartStop/group/embed-certs/serial/DeployApp (12.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-944000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d2610a41-c1f0-4b9d-a984-8845b8864733] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0216 09:56:42.813813    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kubenet-862000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d2610a41-c1f0-4b9d-a984-8845b8864733] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.004012129s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-944000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.26s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-944000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-944000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.326280082s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-944000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

TestStartStop/group/embed-certs/serial/Stop (11.00s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-944000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-944000 --alsologtostderr -v=3: (10.998522381s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.45s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-944000 -n embed-certs-944000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-944000 -n embed-certs-944000: exit status 7 (108.302688ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-944000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.45s)

TestStartStop/group/embed-certs/serial/SecondStart (314.39s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-944000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0216 09:57:05.112670    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
E0216 09:57:14.961215    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
E0216 09:58:01.531975    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 09:58:33.276336    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 09:58:46.971257    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
E0216 09:58:51.289871    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
E0216 09:58:59.748658    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 09:59:24.579125    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 09:59:24.815644    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:24.821031    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:24.832186    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:24.852372    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:24.893338    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:24.973649    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:25.134190    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:25.455126    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:26.096395    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:27.377198    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:29.938684    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:35.059204    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:45.299724    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 09:59:56.439234    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/enable-default-cni-862000/client.crt: no such file or directory
E0216 10:00:05.781123    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
E0216 10:00:11.945159    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/bridge-862000/client.crt: no such file or directory
E0216 10:00:14.338174    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/auto-862000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-944000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (5m13.936540235s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-944000 -n embed-certs-944000
E0216 10:02:14.965769    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (314.39s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fprvx" [eba1f053-7325-455f-8827-ade9e5464b57] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fprvx" [eba1f053-7325-455f-8827-ade9e5464b57] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004828682s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fprvx" [eba1f053-7325-455f-8827-ade9e5464b57] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003809991s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-944000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-944000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (3.32s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-944000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-944000 -n embed-certs-944000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-944000 -n embed-certs-944000: exit status 2 (428.004267ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-944000 -n embed-certs-944000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-944000 -n embed-certs-944000: exit status 2 (429.127567ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-944000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-944000 -n embed-certs-944000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-944000 -n embed-certs-944000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.32s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-768000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-768000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (38.472781932s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.47s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-768000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [40d6d83a-0d02-48a3-b360-876771ce1cc3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [40d6d83a-0d02-48a3-b360-876771ce1cc3] Running
E0216 10:03:28.162154    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/false-862000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.006803142s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-768000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-768000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-768000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.371333835s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-768000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.44s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-768000 --alsologtostderr -v=3
E0216 10:03:33.281893    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/kindnet-862000/client.crt: no such file or directory
E0216 10:03:38.013057    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/custom-flannel-862000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-768000 --alsologtostderr -v=3: (10.880150715s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.88s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-768000 -n default-k8s-diff-port-768000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-768000 -n default-k8s-diff-port-768000: exit status 7 (107.716963ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-768000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.43s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (332.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-768000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
E0216 10:03:46.976618    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/flannel-862000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-768000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (5m32.459174614s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-768000 -n default-k8s-diff-port-768000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (332.91s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-n5mmf" [027b4b67-a0a9-4122-aaa1-19be8b58c2d4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-n5mmf" [027b4b67-a0a9-4122-aaa1-19be8b58c2d4] Running
E0216 10:09:24.827693    2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/no-preload-575000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005092634s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-n5mmf" [027b4b67-a0a9-4122-aaa1-19be8b58c2d4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00507998s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-768000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-768000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-768000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-768000 -n default-k8s-diff-port-768000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-768000 -n default-k8s-diff-port-768000: exit status 2 (438.744412ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-768000 -n default-k8s-diff-port-768000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-768000 -n default-k8s-diff-port-768000: exit status 2 (430.698539ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-768000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-768000 -n default-k8s-diff-port-768000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-768000 -n default-k8s-diff-port-768000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.25s)

TestStartStop/group/newest-cni/serial/FirstStart (35.01s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-047000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-047000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (35.007507543s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-047000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-047000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.187627544s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/newest-cni/serial/Stop (10.89s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-047000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-047000 --alsologtostderr -v=3: (10.890138658s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.89s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.47s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-047000 -n newest-cni-047000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-047000 -n newest-cni-047000: exit status 7 (107.339042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-047000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.47s)

TestStartStop/group/newest-cni/serial/SecondStart (29.67s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-047000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-047000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (29.229890766s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-047000 -n newest-cni-047000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.67s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-047000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/Pause (3.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-047000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-047000 -n newest-cni-047000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-047000 -n newest-cni-047000: exit status 2 (429.341354ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-047000 -n newest-cni-047000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-047000 -n newest-cni-047000: exit status 2 (422.77595ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-047000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-047000 -n newest-cni-047000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-047000 -n newest-cni-047000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.26s)

Test skip (21/333)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (19.15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 15.967427ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-kxwrq" [bcb10a19-c157-4228-a45b-dfebceacb609] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006632666s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-54qxw" [11e1d52d-769d-4532-adb2-3c1154aaa5c2] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005216867s
addons_test.go:340: (dbg) Run:  kubectl --context addons-983000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-983000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-983000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.062319684s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (19.15s)

TestAddons/parallel/Ingress (11.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-983000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-983000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-983000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e3897e0a-0528-4412-bc3c-d037b432d135] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e3897e0a-0528-4412-bc3c-d037b432d135] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005874034s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-983000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.92s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (13.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-060000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-060000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-66n6s" [a21b2daa-ae3a-458f-ae84-6f1dee5663da] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-66n6s" [a21b2daa-ae3a-458f-ae84-6f1dee5663da] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.0052642s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (13.14s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (6.66s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-862000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-862000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-862000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-862000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-862000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-862000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-862000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-862000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-862000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-862000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-862000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: /etc/hosts:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: /etc/resolv.conf:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-862000

>>> host: crictl pods:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: crictl containers:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> k8s: describe netcat deployment:
error: context "cilium-862000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-862000" does not exist

>>> k8s: netcat logs:
error: context "cilium-862000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-862000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-862000" does not exist

>>> k8s: coredns logs:
error: context "cilium-862000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-862000" does not exist

>>> k8s: api server logs:
error: context "cilium-862000" does not exist

>>> host: /etc/cni:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: ip a s:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: ip r s:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: iptables-save:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: iptables table nat:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-862000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-862000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-862000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-862000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-862000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-862000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-862000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-862000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-862000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-862000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-862000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: kubelet daemon config:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> k8s: kubelet logs:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-862000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: docker system info:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: cri-docker daemon status:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: cri-docker daemon config:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: cri-dockerd version:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: containerd daemon status:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: containerd daemon config:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: containerd config dump:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: crio daemon status:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: crio daemon config:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: /etc/crio:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

>>> host: crio config:
* Profile "cilium-862000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-862000"

----------------------- debugLogs end: cilium-862000 [took: 6.19969467s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-862000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-862000
--- SKIP: TestNetworkPlugins/group/cilium (6.66s)

TestStartStop/group/disable-driver-mounts (0.39s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-835000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-835000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.39s)