Test Report: Docker_macOS 18165

21e5735d41df0fbfa8402e4459b7fe72f1b19e7e:2024-02-13:33133

Test failures (12/333)

TestIngressAddonLegacy/StartLegacyK8sCluster (276.41s)
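Note: the exact start command the test harness issued is reproduced verbatim in the log below. As a hedged sketch for reproducing this step by hand (assuming a built out/minikube-darwin-amd64 binary, a running Docker Desktop, and a free profile name; the profile name here is simply copied from this report), the equivalent manual invocation and cleanup would be:

	# repro of the logged start step; any unused profile name works
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-694000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
	# remove the profile afterwards
	out/minikube-darwin-amd64 delete -p ingress-addon-legacy-694000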

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-694000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0213 18:25:08.036169   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:25:23.828904   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:23.834101   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:23.844251   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:23.864864   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:23.906050   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:23.987705   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:24.149195   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:24.470117   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:25.112148   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:26.438675   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:28.999437   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:34.119703   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:44.361907   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:26:04.843648   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:26:45.804198   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:28:07.766263   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-694000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m36.369195389s)
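Note: after a non-zero exit like this, further diagnostics for the profile are usually collected with the standard minikube subcommands (a sketch, assuming the profile still exists on the failing agent):

	# cluster state and aggregated logs for the failed profile
	out/minikube-darwin-amd64 status -p ingress-addon-legacy-694000
	out/minikube-darwin-amd64 logs -p ingress-addon-legacy-694000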

-- stdout --
	* [ingress-addon-legacy-694000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18165
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-694000 in cluster ingress-addon-legacy-694000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0213 18:24:58.043992   42439 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:24:58.044185   42439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:24:58.044192   42439 out.go:304] Setting ErrFile to fd 2...
	I0213 18:24:58.044196   42439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:24:58.044374   42439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 18:24:58.045923   42439 out.go:298] Setting JSON to false
	I0213 18:24:58.068747   42439 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":14357,"bootTime":1707863141,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 18:24:58.068860   42439 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 18:24:58.093137   42439 out.go:177] * [ingress-addon-legacy-694000] minikube v1.32.0 on Darwin 14.3.1
	I0213 18:24:58.155260   42439 out.go:177]   - MINIKUBE_LOCATION=18165
	I0213 18:24:58.134091   42439 notify.go:220] Checking for updates...
	I0213 18:24:58.213002   42439 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 18:24:58.255252   42439 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 18:24:58.297312   42439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 18:24:58.339913   42439 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	I0213 18:24:58.382058   42439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 18:24:58.403517   42439 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 18:24:58.461504   42439 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 18:24:58.461638   42439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:24:58.573039   42439 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-14 02:24:58.559430847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:24:58.595459   42439 out.go:177] * Using the docker driver based on user configuration
	I0213 18:24:58.617064   42439 start.go:298] selected driver: docker
	I0213 18:24:58.617089   42439 start.go:902] validating driver "docker" against <nil>
	I0213 18:24:58.617102   42439 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 18:24:58.622084   42439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:24:58.734536   42439 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-14 02:24:58.724034769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:24:58.734728   42439 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 18:24:58.734914   42439 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 18:24:58.756377   42439 out.go:177] * Using Docker Desktop driver with root privileges
	I0213 18:24:58.778421   42439 cni.go:84] Creating CNI manager for ""
	I0213 18:24:58.778457   42439 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 18:24:58.778473   42439 start_flags.go:321] config:
	{Name:ingress-addon-legacy-694000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-694000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 18:24:58.800451   42439 out.go:177] * Starting control plane node ingress-addon-legacy-694000 in cluster ingress-addon-legacy-694000
	I0213 18:24:58.843471   42439 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 18:24:58.865332   42439 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 18:24:58.907480   42439 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0213 18:24:58.907537   42439 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 18:24:58.962689   42439 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 18:24:58.962716   42439 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 18:24:59.163747   42439 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0213 18:24:59.163795   42439 cache.go:56] Caching tarball of preloaded images
	I0213 18:24:59.164255   42439 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0213 18:24:59.208573   42439 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0213 18:24:59.229947   42439 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0213 18:24:59.773989   42439 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0213 18:25:16.951876   42439 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0213 18:25:16.952075   42439 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0213 18:25:17.585007   42439 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0213 18:25:17.585248   42439 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/config.json ...
	I0213 18:25:17.585275   42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/config.json: {Name:mkb247d9310fe07a1dc14b022dbbd70c65616aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:25:17.585579   42439 cache.go:194] Successfully downloaded all kic artifacts
	I0213 18:25:17.585609   42439 start.go:365] acquiring machines lock for ingress-addon-legacy-694000: {Name:mk376ced87b7a2e785d303268517d60c6f604567 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 18:25:17.586469   42439 start.go:369] acquired machines lock for "ingress-addon-legacy-694000" in 846.713µs
	I0213 18:25:17.586515   42439 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-694000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-694000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 18:25:17.586580   42439 start.go:125] createHost starting for "" (driver="docker")
	I0213 18:25:17.611821   42439 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0213 18:25:17.612167   42439 start.go:159] libmachine.API.Create for "ingress-addon-legacy-694000" (driver="docker")
	I0213 18:25:17.612239   42439 client.go:168] LocalClient.Create starting
	I0213 18:25:17.612443   42439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem
	I0213 18:25:17.612538   42439 main.go:141] libmachine: Decoding PEM data...
	I0213 18:25:17.612570   42439 main.go:141] libmachine: Parsing certificate...
	I0213 18:25:17.612665   42439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem
	I0213 18:25:17.612735   42439 main.go:141] libmachine: Decoding PEM data...
	I0213 18:25:17.612751   42439 main.go:141] libmachine: Parsing certificate...
	I0213 18:25:17.632119   42439 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-694000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0213 18:25:17.684338   42439 cli_runner.go:211] docker network inspect ingress-addon-legacy-694000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0213 18:25:17.684466   42439 network_create.go:281] running [docker network inspect ingress-addon-legacy-694000] to gather additional debugging logs...
	I0213 18:25:17.684486   42439 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-694000
	W0213 18:25:17.734945   42439 cli_runner.go:211] docker network inspect ingress-addon-legacy-694000 returned with exit code 1
	I0213 18:25:17.734978   42439 network_create.go:284] error running [docker network inspect ingress-addon-legacy-694000]: docker network inspect ingress-addon-legacy-694000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-694000 not found
	I0213 18:25:17.734994   42439 network_create.go:286] output of [docker network inspect ingress-addon-legacy-694000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-694000 not found
	
	** /stderr **
	I0213 18:25:17.735145   42439 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 18:25:17.786873   42439 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000197550}
	I0213 18:25:17.786918   42439 network_create.go:124] attempt to create docker network ingress-addon-legacy-694000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0213 18:25:17.786997   42439 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-694000 ingress-addon-legacy-694000
	I0213 18:25:17.874681   42439 network_create.go:108] docker network ingress-addon-legacy-694000 192.168.49.0/24 created
	I0213 18:25:17.874727   42439 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-694000" container
	I0213 18:25:17.874841   42439 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0213 18:25:17.927015   42439 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-694000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-694000 --label created_by.minikube.sigs.k8s.io=true
	I0213 18:25:17.978892   42439 oci.go:103] Successfully created a docker volume ingress-addon-legacy-694000
	I0213 18:25:17.979014   42439 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-694000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-694000 --entrypoint /usr/bin/test -v ingress-addon-legacy-694000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0213 18:25:18.358204   42439 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-694000
	I0213 18:25:18.358249   42439 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0213 18:25:18.358264   42439 kic.go:194] Starting extracting preloaded images to volume ...
	I0213 18:25:18.358381   42439 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-694000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0213 18:25:20.597448   42439 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-694000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.239026125s)
	I0213 18:25:20.597476   42439 kic.go:203] duration metric: took 2.239248 seconds to extract preloaded images to volume
	I0213 18:25:20.597612   42439 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0213 18:25:20.708768   42439 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-694000 --name ingress-addon-legacy-694000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-694000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-694000 --network ingress-addon-legacy-694000 --ip 192.168.49.2 --volume ingress-addon-legacy-694000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0213 18:25:20.983883   42439 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Running}}
	I0213 18:25:21.039626   42439 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Status}}
	I0213 18:25:21.097836   42439 cli_runner.go:164] Run: docker exec ingress-addon-legacy-694000 stat /var/lib/dpkg/alternatives/iptables
	I0213 18:25:21.261511   42439 oci.go:144] the created container "ingress-addon-legacy-694000" has a running status.
	I0213 18:25:21.261561   42439 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa...
	I0213 18:25:21.498865   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0213 18:25:21.498933   42439 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0213 18:25:21.562684   42439 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Status}}
	I0213 18:25:21.615938   42439 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0213 18:25:21.615960   42439 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-694000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0213 18:25:21.710432   42439 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Status}}
	I0213 18:25:21.763849   42439 machine.go:88] provisioning docker machine ...
	I0213 18:25:21.763896   42439 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-694000"
	I0213 18:25:21.764006   42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:25:21.816900   42439 main.go:141] libmachine: Using SSH client type: native
	I0213 18:25:21.817230   42439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53836 <nil> <nil>}
	I0213 18:25:21.817247   42439 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-694000 && echo "ingress-addon-legacy-694000" | sudo tee /etc/hostname
	I0213 18:25:21.980755   42439 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-694000
	
	I0213 18:25:21.980899   42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:25:22.032916   42439 main.go:141] libmachine: Using SSH client type: native
	I0213 18:25:22.033232   42439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53836 <nil> <nil>}
	I0213 18:25:22.033248   42439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-694000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-694000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-694000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 18:25:22.175626   42439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 18:25:22.175644   42439 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18165-38421/.minikube CaCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18165-38421/.minikube}
	I0213 18:25:22.175662   42439 ubuntu.go:177] setting up certificates
	I0213 18:25:22.175668   42439 provision.go:83] configureAuth start
	I0213 18:25:22.175741   42439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-694000
	I0213 18:25:22.227921   42439 provision.go:138] copyHostCerts
	I0213 18:25:22.227967   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
	I0213 18:25:22.228018   42439 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem, removing ...
	I0213 18:25:22.228025   42439 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
	I0213 18:25:22.228177   42439 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem (1078 bytes)
	I0213 18:25:22.228365   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
	I0213 18:25:22.228399   42439 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem, removing ...
	I0213 18:25:22.228404   42439 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
	I0213 18:25:22.228509   42439 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem (1123 bytes)
	I0213 18:25:22.228645   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
	I0213 18:25:22.228691   42439 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem, removing ...
	I0213 18:25:22.228696   42439 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
	I0213 18:25:22.228785   42439 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem (1679 bytes)
	I0213 18:25:22.228922   42439 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-694000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-694000]
	I0213 18:25:22.306228   42439 provision.go:172] copyRemoteCerts
	I0213 18:25:22.306284   42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 18:25:22.306342   42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:25:22.358908   42439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
	I0213 18:25:22.463528   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0213 18:25:22.463612   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 18:25:22.503323   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0213 18:25:22.503397   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0213 18:25:22.543769   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0213 18:25:22.543908   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 18:25:22.584768   42439 provision.go:86] duration metric: configureAuth took 409.052462ms
	I0213 18:25:22.584807   42439 ubuntu.go:193] setting minikube options for container-runtime
	I0213 18:25:22.585056   42439 config.go:182] Loaded profile config "ingress-addon-legacy-694000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 18:25:22.585205   42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:25:22.638654   42439 main.go:141] libmachine: Using SSH client type: native
	I0213 18:25:22.638965   42439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53836 <nil> <nil>}
	I0213 18:25:22.638982   42439 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 18:25:22.776320   42439 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 18:25:22.776339   42439 ubuntu.go:71] root file system type: overlay
	I0213 18:25:22.776425   42439 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 18:25:22.776506   42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:25:22.828730   42439 main.go:141] libmachine: Using SSH client type: native
	I0213 18:25:22.829034   42439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53836 <nil> <nil>}
	I0213 18:25:22.829088   42439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 18:25:22.992527   42439 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 18:25:22.992632   42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:25:23.044840   42439 main.go:141] libmachine: Using SSH client type: native
	I0213 18:25:23.072529   42439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53836 <nil> <nil>}
	I0213 18:25:23.072560   42439 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 18:25:23.679644   42439 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-14 02:25:22.987355255 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0213 18:25:23.679686   42439 machine.go:91] provisioned docker machine in 1.915829512s
	I0213 18:25:23.679712   42439 client.go:171] LocalClient.Create took 6.067542671s
	I0213 18:25:23.679738   42439 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-694000" took 6.067671783s
	I0213 18:25:23.679746   42439 start.go:300] post-start starting for "ingress-addon-legacy-694000" (driver="docker")
	I0213 18:25:23.679754   42439 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 18:25:23.679866   42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 18:25:23.680033   42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:25:23.733912   42439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
	I0213 18:25:23.837754   42439 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 18:25:23.843236   42439 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 18:25:23.843290   42439 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 18:25:23.843299   42439 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 18:25:23.843304   42439 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 18:25:23.843329   42439 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/addons for local assets ...
	I0213 18:25:23.843435   42439 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/files for local assets ...
	I0213 18:25:23.843688   42439 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> 388992.pem in /etc/ssl/certs
	I0213 18:25:23.843694   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> /etc/ssl/certs/388992.pem
	I0213 18:25:23.843920   42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 18:25:23.858311   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /etc/ssl/certs/388992.pem (1708 bytes)
	I0213 18:25:23.898179   42439 start.go:303] post-start completed in 218.395299ms
	I0213 18:25:23.899090   42439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-694000
	I0213 18:25:23.953064   42439 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/config.json ...
	I0213 18:25:23.953539   42439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 18:25:23.953627   42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:25:24.006354   42439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
	I0213 18:25:24.100319   42439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 18:25:24.105089   42439 start.go:128] duration metric: createHost completed in 6.518602481s
	I0213 18:25:24.105108   42439 start.go:83] releasing machines lock for "ingress-addon-legacy-694000", held for 6.518725821s
	I0213 18:25:24.105193   42439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-694000
	I0213 18:25:24.159183   42439 ssh_runner.go:195] Run: cat /version.json
	I0213 18:25:24.159194   42439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 18:25:24.159264   42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:25:24.159276   42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:25:24.217117   42439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
	I0213 18:25:24.217117   42439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
	I0213 18:25:24.412005   42439 ssh_runner.go:195] Run: systemctl --version
	I0213 18:25:24.416555   42439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 18:25:24.421446   42439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 18:25:24.463374   42439 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 18:25:24.463464   42439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 18:25:24.492815   42439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 18:25:24.521300   42439 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 18:25:24.521350   42439 start.go:475] detecting cgroup driver to use...
	I0213 18:25:24.521376   42439 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 18:25:24.521578   42439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 18:25:24.551366   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0213 18:25:24.567601   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 18:25:24.583450   42439 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 18:25:24.583512   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 18:25:24.599058   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 18:25:24.615116   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 18:25:24.631724   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 18:25:24.647791   42439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 18:25:24.663538   42439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 18:25:24.680281   42439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 18:25:24.695352   42439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 18:25:24.710464   42439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 18:25:24.768715   42439 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 18:25:24.854515   42439 start.go:475] detecting cgroup driver to use...
	I0213 18:25:24.854537   42439 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 18:25:24.854594   42439 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 18:25:24.874401   42439 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 18:25:24.874530   42439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 18:25:24.893645   42439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 18:25:24.924266   42439 ssh_runner.go:195] Run: which cri-dockerd
	I0213 18:25:24.928471   42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 18:25:24.943815   42439 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 18:25:24.974647   42439 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 18:25:25.062237   42439 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 18:25:25.121981   42439 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 18:25:25.122119   42439 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 18:25:25.151842   42439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 18:25:25.215379   42439 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 18:25:25.461626   42439 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 18:25:25.486230   42439 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 18:25:25.554408   42439 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I0213 18:25:25.554537   42439 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-694000 dig +short host.docker.internal
	I0213 18:25:25.676969   42439 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 18:25:25.677060   42439 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 18:25:25.682434   42439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 18:25:25.700159   42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:25:25.752756   42439 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0213 18:25:25.752856   42439 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 18:25:25.773437   42439 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0213 18:25:25.773463   42439 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0213 18:25:25.773523   42439 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 18:25:25.788607   42439 ssh_runner.go:195] Run: which lz4
	I0213 18:25:25.792741   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0213 18:25:25.792890   42439 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 18:25:25.797131   42439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 18:25:25.797153   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0213 18:25:32.586326   42439 docker.go:649] Took 6.793567 seconds to copy over tarball
	I0213 18:25:32.586396   42439 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 18:25:34.306659   42439 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.720270673s)
	I0213 18:25:34.306676   42439 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 18:25:34.360742   42439 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 18:25:34.376352   42439 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0213 18:25:34.404653   42439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 18:25:34.466278   42439 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 18:25:35.679557   42439 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.21327802s)
	I0213 18:25:35.679775   42439 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 18:25:35.699078   42439 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0213 18:25:35.699094   42439 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0213 18:25:35.699105   42439 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 18:25:35.704813   42439 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 18:25:35.704840   42439 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0213 18:25:35.705484   42439 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 18:25:35.705493   42439 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0213 18:25:35.705513   42439 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 18:25:35.705556   42439 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 18:25:35.705563   42439 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0213 18:25:35.705586   42439 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 18:25:35.709811   42439 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 18:25:35.710288   42439 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0213 18:25:35.711584   42439 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 18:25:35.711736   42439 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 18:25:35.711610   42439 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0213 18:25:35.711780   42439 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0213 18:25:35.711817   42439 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 18:25:35.712000   42439 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 18:25:37.615884   42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0213 18:25:37.635494   42439 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0213 18:25:37.635533   42439 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0213 18:25:37.635599   42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0213 18:25:37.653555   42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0213 18:25:37.671259   42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0213 18:25:37.690990   42439 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0213 18:25:37.691013   42439 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 18:25:37.691073   42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0213 18:25:37.708488   42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0213 18:25:37.708773   42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0213 18:25:37.727608   42439 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0213 18:25:37.727639   42439 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 18:25:37.727713   42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0213 18:25:37.740470   42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0213 18:25:37.746329   42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0213 18:25:37.747705   42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0213 18:25:37.752816   42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0213 18:25:37.761508   42439 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0213 18:25:37.761542   42439 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0213 18:25:37.761633   42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0213 18:25:37.767353   42439 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0213 18:25:37.767378   42439 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0213 18:25:37.767439   42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0213 18:25:37.773118   42439 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0213 18:25:37.773148   42439 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 18:25:37.773212   42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0213 18:25:37.784140   42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 18:25:37.787773   42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0213 18:25:37.788394   42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0213 18:25:37.794296   42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0213 18:25:37.805346   42439 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0213 18:25:37.805369   42439 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 18:25:37.805427   42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 18:25:37.823091   42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0213 18:25:38.092320   42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 18:25:38.112816   42439 cache_images.go:92] LoadImages completed in 2.413733817s
	W0213 18:25:38.112885   42439 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0213 18:25:38.112993   42439 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 18:25:38.160506   42439 cni.go:84] Creating CNI manager for ""
	I0213 18:25:38.160523   42439 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 18:25:38.160537   42439 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 18:25:38.160552   42439 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-694000 NodeName:ingress-addon-legacy-694000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 18:25:38.160636   42439 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-694000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 18:25:38.160693   42439 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-694000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-694000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 18:25:38.160790   42439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0213 18:25:38.175932   42439 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 18:25:38.176010   42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 18:25:38.190422   42439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0213 18:25:38.218867   42439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0213 18:25:38.247881   42439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0213 18:25:38.277729   42439 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0213 18:25:38.282313   42439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 18:25:38.300244   42439 certs.go:56] Setting up /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000 for IP: 192.168.49.2
	I0213 18:25:38.300326   42439 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5f1a81e3b2f96d4314e8cdee92a3e3396cb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:25:38.300530   42439 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key
	I0213 18:25:38.300621   42439 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key
	I0213 18:25:38.300674   42439 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.key
	I0213 18:25:38.300693   42439 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.crt with IP's: []
	I0213 18:25:38.502529   42439 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.crt ...
	I0213 18:25:38.502546   42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.crt: {Name:mk45a255dad4dc9ca0803c595f373b3ab70313f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:25:38.502910   42439 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.key ...
	I0213 18:25:38.502919   42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.key: {Name:mkb55906d001aff3c57de380613b5cc14210b0cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:25:38.503150   42439 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key.dd3b5fb2
	I0213 18:25:38.503164   42439 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 18:25:38.755632   42439 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt.dd3b5fb2 ...
	I0213 18:25:38.755646   42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt.dd3b5fb2: {Name:mk51d5f9200fbf20e40ef0f598e6720c414689d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:25:38.755951   42439 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key.dd3b5fb2 ...
	I0213 18:25:38.755964   42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key.dd3b5fb2: {Name:mk04c4c35039f4e6b18de33df6fe8e2cd297929d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:25:38.756171   42439 certs.go:337] copying /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt
	I0213 18:25:38.756359   42439 certs.go:341] copying /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key
	I0213 18:25:38.756517   42439 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.key
	I0213 18:25:38.756532   42439 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.crt with IP's: []
	I0213 18:25:38.925357   42439 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.crt ...
	I0213 18:25:38.925368   42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.crt: {Name:mkbb681791d725d411b674f575b79f85eada4c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:25:38.925627   42439 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.key ...
	I0213 18:25:38.925636   42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.key: {Name:mkf584986fd7249feec69304158aae04b02db8d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:25:38.925840   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0213 18:25:38.925873   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0213 18:25:38.925903   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0213 18:25:38.925920   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0213 18:25:38.925937   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0213 18:25:38.925955   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0213 18:25:38.925978   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0213 18:25:38.926006   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0213 18:25:38.926096   42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem (1338 bytes)
	W0213 18:25:38.926145   42439 certs.go:433] ignoring /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899_empty.pem, impossibly tiny 0 bytes
	I0213 18:25:38.926154   42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 18:25:38.926184   42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem (1078 bytes)
	I0213 18:25:38.926213   42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem (1123 bytes)
	I0213 18:25:38.926245   42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem (1679 bytes)
	I0213 18:25:38.926309   42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem (1708 bytes)
	I0213 18:25:38.926358   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0213 18:25:38.926379   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem -> /usr/share/ca-certificates/38899.pem
	I0213 18:25:38.926397   42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> /usr/share/ca-certificates/388992.pem
	I0213 18:25:38.926894   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 18:25:38.968725   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 18:25:39.009286   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 18:25:39.050795   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 18:25:39.091770   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 18:25:39.132119   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 18:25:39.173413   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 18:25:39.215231   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 18:25:39.256013   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 18:25:39.296533   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem --> /usr/share/ca-certificates/38899.pem (1338 bytes)
	I0213 18:25:39.337379   42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /usr/share/ca-certificates/388992.pem (1708 bytes)
	I0213 18:25:39.379805   42439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 18:25:39.409029   42439 ssh_runner.go:195] Run: openssl version
	I0213 18:25:39.415123   42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38899.pem && ln -fs /usr/share/ca-certificates/38899.pem /etc/ssl/certs/38899.pem"
	I0213 18:25:39.431418   42439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38899.pem
	I0213 18:25:39.435812   42439 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 02:17 /usr/share/ca-certificates/38899.pem
	I0213 18:25:39.435860   42439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38899.pem
	I0213 18:25:39.442399   42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38899.pem /etc/ssl/certs/51391683.0"
	I0213 18:25:39.458099   42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388992.pem && ln -fs /usr/share/ca-certificates/388992.pem /etc/ssl/certs/388992.pem"
	I0213 18:25:39.474078   42439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388992.pem
	I0213 18:25:39.478872   42439 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 02:17 /usr/share/ca-certificates/388992.pem
	I0213 18:25:39.478930   42439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388992.pem
	I0213 18:25:39.485473   42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/388992.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 18:25:39.501527   42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 18:25:39.517611   42439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 18:25:39.522169   42439 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0213 18:25:39.522215   42439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 18:25:39.528689   42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 18:25:39.544510   42439 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 18:25:39.548698   42439 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 18:25:39.548761   42439 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-694000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-694000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 18:25:39.548860   42439 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 18:25:39.566426   42439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 18:25:39.581499   42439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 18:25:39.596852   42439 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 18:25:39.596944   42439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 18:25:39.612465   42439 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 18:25:39.612500   42439 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 18:25:39.679151   42439 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0213 18:25:39.679233   42439 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 18:25:39.980055   42439 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 18:25:39.980218   42439 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 18:25:39.980382   42439 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 18:25:40.197775   42439 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 18:25:40.198416   42439 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 18:25:40.198453   42439 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 18:25:40.270262   42439 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 18:25:40.291857   42439 out.go:204]   - Generating certificates and keys ...
	I0213 18:25:40.291993   42439 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 18:25:40.292088   42439 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 18:25:40.448827   42439 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 18:25:40.628033   42439 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 18:25:40.869566   42439 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 18:25:40.940807   42439 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 18:25:41.032352   42439 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 18:25:41.032517   42439 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0213 18:25:41.260458   42439 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 18:25:41.260679   42439 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0213 18:25:41.333320   42439 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 18:25:41.529128   42439 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 18:25:41.597342   42439 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 18:25:41.597443   42439 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 18:25:41.792254   42439 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 18:25:41.835280   42439 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 18:25:41.934746   42439 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 18:25:42.051969   42439 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 18:25:42.052618   42439 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 18:25:42.076749   42439 out.go:204]   - Booting up control plane ...
	I0213 18:25:42.076911   42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 18:25:42.077038   42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 18:25:42.077173   42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 18:25:42.077296   42439 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 18:25:42.077559   42439 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 18:26:22.062630   42439 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 18:26:22.063347   42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:26:22.063486   42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:26:27.065022   42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:26:27.065274   42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:26:37.067610   42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:26:37.067828   42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:26:57.069919   42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:26:57.070112   42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:27:37.100562   42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:27:37.100785   42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:27:37.100808   42439 kubeadm.go:322] 
	I0213 18:27:37.100859   42439 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0213 18:27:37.100904   42439 kubeadm.go:322] 		timed out waiting for the condition
	I0213 18:27:37.100913   42439 kubeadm.go:322] 
	I0213 18:27:37.100950   42439 kubeadm.go:322] 	This error is likely caused by:
	I0213 18:27:37.100988   42439 kubeadm.go:322] 		- The kubelet is not running
	I0213 18:27:37.101090   42439 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 18:27:37.101097   42439 kubeadm.go:322] 
	I0213 18:27:37.101208   42439 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 18:27:37.101272   42439 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0213 18:27:37.101318   42439 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0213 18:27:37.101328   42439 kubeadm.go:322] 
	I0213 18:27:37.101442   42439 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 18:27:37.101528   42439 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0213 18:27:37.101535   42439 kubeadm.go:322] 
	I0213 18:27:37.101627   42439 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0213 18:27:37.101690   42439 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0213 18:27:37.101777   42439 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0213 18:27:37.101810   42439 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0213 18:27:37.101819   42439 kubeadm.go:322] 
	I0213 18:27:37.105805   42439 kubeadm.go:322] W0214 02:25:39.677836    1692 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0213 18:27:37.105939   42439 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 18:27:37.106023   42439 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 18:27:37.106186   42439 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0213 18:27:37.106294   42439 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 18:27:37.106385   42439 kubeadm.go:322] W0214 02:25:42.058099    1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 18:27:37.106467   42439 kubeadm.go:322] W0214 02:25:42.058969    1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 18:27:37.106524   42439 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 18:27:37.106592   42439 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0213 18:27:37.106670   42439 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0214 02:25:39.677836    1692 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0214 02:25:42.058099    1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0214 02:25:42.058969    1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0214 02:25:39.677836    1692 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0214 02:25:42.058099    1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0214 02:25:42.058969    1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0213 18:27:37.106706   42439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 18:27:37.526096   42439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 18:27:37.543464   42439 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 18:27:37.543530   42439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 18:27:37.558194   42439 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 18:27:37.558229   42439 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 18:27:37.613132   42439 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0213 18:27:37.613259   42439 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 18:27:37.847822   42439 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 18:27:37.847901   42439 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 18:27:37.847983   42439 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 18:27:38.013054   42439 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 18:27:38.014546   42439 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 18:27:38.014585   42439 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 18:27:38.079484   42439 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 18:27:38.115748   42439 out.go:204]   - Generating certificates and keys ...
	I0213 18:27:38.115818   42439 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 18:27:38.115929   42439 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 18:27:38.116015   42439 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 18:27:38.116090   42439 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 18:27:38.116186   42439 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 18:27:38.116277   42439 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 18:27:38.116366   42439 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 18:27:38.116406   42439 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 18:27:38.116493   42439 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 18:27:38.116621   42439 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 18:27:38.116670   42439 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 18:27:38.116782   42439 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 18:27:38.327345   42439 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 18:27:38.475374   42439 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 18:27:38.677019   42439 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 18:27:38.825940   42439 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 18:27:38.826786   42439 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 18:27:38.848295   42439 out.go:204]   - Booting up control plane ...
	I0213 18:27:38.848389   42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 18:27:38.848476   42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 18:27:38.848554   42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 18:27:38.848659   42439 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 18:27:38.848838   42439 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 18:28:18.848610   42439 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 18:28:18.849281   42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:28:18.849486   42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:28:23.851163   42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:28:23.851384   42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:28:33.853397   42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:28:33.853530   42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:28:53.855874   42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:28:53.856120   42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:29:33.858556   42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:29:33.858788   42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:29:33.858803   42439 kubeadm.go:322] 
	I0213 18:29:33.858838   42439 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0213 18:29:33.858876   42439 kubeadm.go:322] 		timed out waiting for the condition
	I0213 18:29:33.858884   42439 kubeadm.go:322] 
	I0213 18:29:33.858915   42439 kubeadm.go:322] 	This error is likely caused by:
	I0213 18:29:33.858965   42439 kubeadm.go:322] 		- The kubelet is not running
	I0213 18:29:33.859080   42439 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 18:29:33.859087   42439 kubeadm.go:322] 
	I0213 18:29:33.859201   42439 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 18:29:33.859238   42439 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0213 18:29:33.859270   42439 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0213 18:29:33.859295   42439 kubeadm.go:322] 
	I0213 18:29:33.859424   42439 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 18:29:33.859507   42439 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0213 18:29:33.859520   42439 kubeadm.go:322] 
	I0213 18:29:33.859635   42439 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0213 18:29:33.859694   42439 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0213 18:29:33.859776   42439 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0213 18:29:33.859809   42439 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0213 18:29:33.859816   42439 kubeadm.go:322] 
	I0213 18:29:33.863937   42439 kubeadm.go:322] W0214 02:27:37.603285    4698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0213 18:29:33.864075   42439 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 18:29:33.864140   42439 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 18:29:33.864251   42439 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0213 18:29:33.864334   42439 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 18:29:33.864434   42439 kubeadm.go:322] W0214 02:27:38.820504    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 18:29:33.864540   42439 kubeadm.go:322] W0214 02:27:38.821386    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 18:29:33.864608   42439 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 18:29:33.864672   42439 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0213 18:29:33.864710   42439 kubeadm.go:406] StartCluster complete in 3m54.274127564s
	I0213 18:29:33.864797   42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 18:29:33.882606   42439 logs.go:276] 0 containers: []
	W0213 18:29:33.882621   42439 logs.go:278] No container was found matching "kube-apiserver"
	I0213 18:29:33.882688   42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 18:29:33.901477   42439 logs.go:276] 0 containers: []
	W0213 18:29:33.901491   42439 logs.go:278] No container was found matching "etcd"
	I0213 18:29:33.901559   42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 18:29:33.918997   42439 logs.go:276] 0 containers: []
	W0213 18:29:33.919014   42439 logs.go:278] No container was found matching "coredns"
	I0213 18:29:33.919092   42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 18:29:33.936703   42439 logs.go:276] 0 containers: []
	W0213 18:29:33.936716   42439 logs.go:278] No container was found matching "kube-scheduler"
	I0213 18:29:33.936784   42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 18:29:33.954643   42439 logs.go:276] 0 containers: []
	W0213 18:29:33.954658   42439 logs.go:278] No container was found matching "kube-proxy"
	I0213 18:29:33.954722   42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 18:29:33.971642   42439 logs.go:276] 0 containers: []
	W0213 18:29:33.971657   42439 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 18:29:33.971724   42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 18:29:33.989744   42439 logs.go:276] 0 containers: []
	W0213 18:29:33.989759   42439 logs.go:278] No container was found matching "kindnet"
	I0213 18:29:33.989767   42439 logs.go:123] Gathering logs for kubelet ...
	I0213 18:29:33.989783   42439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 18:29:34.031751   42439 logs.go:123] Gathering logs for dmesg ...
	I0213 18:29:34.031766   42439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 18:29:34.051795   42439 logs.go:123] Gathering logs for describe nodes ...
	I0213 18:29:34.051810   42439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 18:29:34.104430   42439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 18:29:34.104443   42439 logs.go:123] Gathering logs for Docker ...
	I0213 18:29:34.104453   42439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 18:29:34.125757   42439 logs.go:123] Gathering logs for container status ...
	I0213 18:29:34.125772   42439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0213 18:29:34.185133   42439 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0214 02:27:37.603285    4698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0214 02:27:38.820504    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0214 02:27:38.821386    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0213 18:29:34.185156   42439 out.go:239] * 
	* 
	W0213 18:29:34.185194   42439 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0214 02:27:37.603285    4698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0214 02:27:38.820504    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0214 02:27:38.821386    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0214 02:27:37.603285    4698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0214 02:27:38.820504    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0214 02:27:38.821386    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 18:29:34.185215   42439 out.go:239] * 
	* 
	W0213 18:29:34.185840   42439 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 18:29:34.271714   42439 out.go:177] 
	W0213 18:29:34.314627   42439 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0214 02:27:37.603285    4698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0214 02:27:38.820504    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0214 02:27:38.821386    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0214 02:27:37.603285    4698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0214 02:27:38.820504    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0214 02:27:38.821386    4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 18:29:34.314688   42439 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0213 18:29:34.314712   42439 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0213 18:29:34.336650   42439 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-694000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (276.41s)
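Note on the failure above: the run dies in wait-control-plane because the kubelet inside the node never answers http://localhost:10248/healthz, and the preflight warnings point at a Docker cgroup-driver mismatch ("cgroupfs" detected, "systemd" recommended) on an unvalidated Docker 24.0.7. A minimal follow-up sketch along the lines the log itself suggests; the minikube ssh/delete invocations and flag ordering are illustrative additions, and only the profile name, start flags, and the suggested --extra-config value are taken from this run:

    # Inspect the kubelet inside the node, as the kubeadm output above recommends
    out/minikube-darwin-amd64 -p ingress-addon-legacy-694000 ssh "sudo systemctl status kubelet"
    out/minikube-darwin-amd64 -p ingress-addon-legacy-694000 ssh "sudo journalctl -xeu kubelet"

    # List any Kubernetes containers the runtime managed to start
    out/minikube-darwin-amd64 -p ingress-addon-legacy-694000 ssh "docker ps -a | grep kube | grep -v pause"

    # Retry with the cgroup driver named in minikube's own Suggestion line
    out/minikube-darwin-amd64 delete -p ingress-addon-legacy-694000
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-694000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker --extra-config=kubelet.cgroup-driver=systemd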

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (115.36s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-694000 addons enable ingress --alsologtostderr -v=5
E0213 18:29:40.391194   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:30:23.870746   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:30:51.609358   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-694000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m54.905523909s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 18:29:34.504231   42723 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:29:34.505423   42723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:29:34.505430   42723 out.go:304] Setting ErrFile to fd 2...
	I0213 18:29:34.505446   42723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:29:34.505655   42723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 18:29:34.506017   42723 mustload.go:65] Loading cluster: ingress-addon-legacy-694000
	I0213 18:29:34.506334   42723 config.go:182] Loaded profile config "ingress-addon-legacy-694000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 18:29:34.506349   42723 addons.go:597] checking whether the cluster is paused
	I0213 18:29:34.506443   42723 config.go:182] Loaded profile config "ingress-addon-legacy-694000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 18:29:34.506459   42723 host.go:66] Checking if "ingress-addon-legacy-694000" exists ...
	I0213 18:29:34.506858   42723 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Status}}
	I0213 18:29:34.558272   42723 ssh_runner.go:195] Run: systemctl --version
	I0213 18:29:34.558370   42723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:29:34.608730   42723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
	I0213 18:29:34.703291   42723 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 18:29:34.743114   42723 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0213 18:29:34.764794   42723 config.go:182] Loaded profile config "ingress-addon-legacy-694000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 18:29:34.764820   42723 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-694000"
	I0213 18:29:34.764834   42723 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-694000"
	I0213 18:29:34.764884   42723 host.go:66] Checking if "ingress-addon-legacy-694000" exists ...
	I0213 18:29:34.765477   42723 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Status}}
	I0213 18:29:34.838702   42723 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0213 18:29:34.859893   42723 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0213 18:29:34.881703   42723 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0213 18:29:34.902654   42723 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0213 18:29:34.924127   42723 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 18:29:34.924150   42723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0213 18:29:34.924248   42723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:29:34.975087   42723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
	I0213 18:29:35.092451   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:29:35.153243   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:35.153270   42723 retry.go:31] will retry after 252.383378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:35.407859   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:29:35.465539   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:35.465556   42723 retry.go:31] will retry after 316.200009ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:35.782106   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:29:35.849321   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:35.849365   42723 retry.go:31] will retry after 717.065249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:36.567836   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:29:36.631108   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:36.631129   42723 retry.go:31] will retry after 812.499947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:37.444116   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:29:37.506326   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:37.506353   42723 retry.go:31] will retry after 1.792242487s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:39.299016   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:29:39.360084   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:39.360106   42723 retry.go:31] will retry after 2.038811849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:41.399740   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:29:41.454008   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:41.454027   42723 retry.go:31] will retry after 4.232098157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:45.686269   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:29:45.750932   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:45.750953   42723 retry.go:31] will retry after 5.080551346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:50.832474   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:29:50.895919   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:50.895937   42723 retry.go:31] will retry after 6.216893836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:57.113111   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:29:57.172762   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:29:57.172779   42723 retry.go:31] will retry after 9.647674481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:30:06.820955   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:30:06.877105   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:30:06.877130   42723 retry.go:31] will retry after 21.365903152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:30:28.245166   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:30:28.301921   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:30:28.301940   42723 retry.go:31] will retry after 15.489685711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:30:43.792462   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:30:43.853516   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:30:43.853534   42723 retry.go:31] will retry after 45.310089246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:29.165519   42723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 18:31:29.223290   42723 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:29.223316   42723 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-694000"
	I0213 18:31:29.245062   42723 out.go:177] * Verifying ingress addon...
	I0213 18:31:29.267101   42723 out.go:177] 
	W0213 18:31:29.287731   42723 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-694000" does not exist: client config: context "ingress-addon-legacy-694000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-694000" does not exist: client config: context "ingress-addon-legacy-694000" does not exist]
	W0213 18:31:29.287769   42723 out.go:239] * 
	* 
	W0213 18:31:29.300953   42723 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 18:31:29.322963   42723 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-694000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-694000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505",
	        "Created": "2024-02-14T02:25:20.762737915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 60140,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T02:25:20.975127891Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/hosts",
	        "LogPath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505-json.log",
	        "Name": "/ingress-addon-legacy-694000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-694000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-694000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-694000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-694000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-694000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-694000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-694000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6eb7430073ff5769aae8ed2015e3a79d7432b6bef206badf82d3994d4e5ac572",
	            "SandboxKey": "/var/run/docker/netns/6eb7430073ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53836"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53837"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53838"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53834"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53835"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-694000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ae6d692e971",
	                        "ingress-addon-legacy-694000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "e634f9fcee264da7a430c6cd8672036010defce8e68863e3a32cbd2fd1b55adb",
	                    "EndpointID": "ff4b291a1df9dd37181fbb03af4cb7ccf2c0ee7916ee0ff98605c94565a23623",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-694000",
	                        "3ae6d692e971"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-694000 -n ingress-addon-legacy-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-694000 -n ingress-addon-legacy-694000: exit status 6 (400.24072ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 18:31:29.783847   42806 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-694000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-694000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (115.36s)
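Both the enable loop above (repeated "The connection to the server localhost:8443 was refused") and the status check ("ingress-addon-legacy-694000" does not appear in the kubeconfig) point at the apiserver never having been registered after the failed start. A minimal sketch of the check the status warning itself recommends, assuming the same host and profile (the kubectl step is illustrative):

	# Rewrite the stale kubeconfig entry, as the status warning recommends
	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-694000

	# Confirm whether the context now exists and what the cluster reports
	kubectl config get-contexts
	out/minikube-darwin-amd64 status -p ingress-addon-legacy-694000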

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (99.19s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-694000 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-694000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m38.734053848s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 18:31:29.857495   42816 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:31:29.858342   42816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:31:29.858348   42816 out.go:304] Setting ErrFile to fd 2...
	I0213 18:31:29.858352   42816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:31:29.858552   42816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 18:31:29.858904   42816 mustload.go:65] Loading cluster: ingress-addon-legacy-694000
	I0213 18:31:29.859183   42816 config.go:182] Loaded profile config "ingress-addon-legacy-694000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 18:31:29.859197   42816 addons.go:597] checking whether the cluster is paused
	I0213 18:31:29.859283   42816 config.go:182] Loaded profile config "ingress-addon-legacy-694000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 18:31:29.859299   42816 host.go:66] Checking if "ingress-addon-legacy-694000" exists ...
	I0213 18:31:29.859690   42816 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Status}}
	I0213 18:31:29.909719   42816 ssh_runner.go:195] Run: systemctl --version
	I0213 18:31:29.909839   42816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:31:29.959864   42816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
	I0213 18:31:30.054012   42816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 18:31:30.093692   42816 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0213 18:31:30.115545   42816 config.go:182] Loaded profile config "ingress-addon-legacy-694000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 18:31:30.115563   42816 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-694000"
	I0213 18:31:30.115572   42816 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-694000"
	I0213 18:31:30.115616   42816 host.go:66] Checking if "ingress-addon-legacy-694000" exists ...
	I0213 18:31:30.115959   42816 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Status}}
	I0213 18:31:30.187255   42816 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0213 18:31:30.229378   42816 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0213 18:31:30.250337   42816 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 18:31:30.250356   42816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0213 18:31:30.250444   42816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
	I0213 18:31:30.300851   42816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
	I0213 18:31:30.417565   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:31:30.498516   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:30.498552   42816 retry.go:31] will retry after 234.723483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:30.733785   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:31:30.793627   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:30.793644   42816 retry.go:31] will retry after 231.394856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:31.025695   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:31:31.096271   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:31.096291   42816 retry.go:31] will retry after 322.133009ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:31.420685   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:31:31.477224   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:31.477243   42816 retry.go:31] will retry after 1.231466859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:32.709011   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:31:32.766628   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:32.766650   42816 retry.go:31] will retry after 1.324849204s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:34.093323   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:31:34.157183   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:34.157200   42816 retry.go:31] will retry after 1.217227934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:35.375049   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:31:35.436951   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:35.436975   42816 retry.go:31] will retry after 3.162220286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:38.601429   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:31:38.657443   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:38.657462   42816 retry.go:31] will retry after 2.772048807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:41.431254   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:31:41.500491   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:41.500508   42816 retry.go:31] will retry after 9.225020616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:50.725765   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:31:50.784989   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:50.785012   42816 retry.go:31] will retry after 8.372264418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:59.157896   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:31:59.218315   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:31:59.218331   42816 retry.go:31] will retry after 16.522924875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:32:15.741678   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:32:15.798780   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:32:15.798797   42816 retry.go:31] will retry after 15.00132381s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:32:30.802315   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:32:30.859685   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:32:30.859702   42816 retry.go:31] will retry after 37.518146713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:33:08.378738   42816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 18:33:08.439993   42816 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 18:33:08.460786   42816 out.go:177] 
	W0213 18:33:08.482491   42816 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0213 18:33:08.482530   42816 out.go:239] * 
	* 
	W0213 18:33:08.488644   42816 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 18:33:08.509755   42816 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
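The ingress-dns enable fails the same way as the ingress addon: every kubectl apply against localhost:8443 is refused, so the apiserver inside the node is almost certainly not running at all. A quick way to confirm that from the host, assuming a docker-runtime node with kubelet-managed container names (the name filter below is illustrative, modelled on the k8s_ naming already visible in the log):

	# List apiserver containers (running or exited) inside the minikube node
	out/minikube-darwin-amd64 -p ingress-addon-legacy-694000 ssh -- \
	  docker ps -a --filter name=k8s_kube-apiserver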
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-694000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-694000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505",
	        "Created": "2024-02-14T02:25:20.762737915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 60140,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T02:25:20.975127891Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/hosts",
	        "LogPath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505-json.log",
	        "Name": "/ingress-addon-legacy-694000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-694000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-694000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-694000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-694000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-694000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-694000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-694000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6eb7430073ff5769aae8ed2015e3a79d7432b6bef206badf82d3994d4e5ac572",
	            "SandboxKey": "/var/run/docker/netns/6eb7430073ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53836"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53837"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53838"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53834"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53835"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-694000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ae6d692e971",
	                        "ingress-addon-legacy-694000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "e634f9fcee264da7a430c6cd8672036010defce8e68863e3a32cbd2fd1b55adb",
	                    "EndpointID": "ff4b291a1df9dd37181fbb03af4cb7ccf2c0ee7916ee0ff98605c94565a23623",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-694000",
	                        "3ae6d692e971"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-694000 -n ingress-addon-legacy-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-694000 -n ingress-addon-legacy-694000: exit status 6 (399.354843ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0213 18:33:08.970664   42866 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-694000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-694000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (99.19s)
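The kubeconfig error in the post-mortem above ("ingress-addon-legacy-694000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig) means the status helper has no API-server endpoint to probe, which is why the host check reports exit status 6 even though the container shows Running. As a rough, hypothetical repro outside the test harness, assuming the container itself is healthy, the kubeconfig entry can usually be regenerated and verified with:

	# regenerate the kubeconfig context for this profile (illustrative; profile name taken from the log above)
	minikube -p ingress-addon-legacy-694000 update-context
	# confirm the context exists and the API server answers
	kubectl config get-contexts ingress-addon-legacy-694000
	kubectl --context ingress-addon-legacy-694000 get nodes

Whether this would recover a run like the one above depends on why the `addons enable ingress-dns` step exited with status 10 in the first place; these commands only re-point kubectl, they do not restart the cluster.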

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-694000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-694000:

-- stdout --
	[
	    {
	        "Id": "3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505",
	        "Created": "2024-02-14T02:25:20.762737915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 60140,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T02:25:20.975127891Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/hosts",
	        "LogPath": "/var/lib/docker/containers/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505/3ae6d692e971a0d1e4c7adbb0eb37db018c736bfd11a310dbe1af47910536505-json.log",
	        "Name": "/ingress-addon-legacy-694000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-694000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-694000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f83bc3889fe09fb176c0c3859451b986f51ec18101c9dd1317272a81f3de24e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-694000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-694000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-694000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-694000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-694000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6eb7430073ff5769aae8ed2015e3a79d7432b6bef206badf82d3994d4e5ac572",
	            "SandboxKey": "/var/run/docker/netns/6eb7430073ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53836"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53837"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53838"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53834"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53835"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-694000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ae6d692e971",
	                        "ingress-addon-legacy-694000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "e634f9fcee264da7a430c6cd8672036010defce8e68863e3a32cbd2fd1b55adb",
	                    "EndpointID": "ff4b291a1df9dd37181fbb03af4cb7ccf2c0ee7916ee0ff98605c94565a23623",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-694000",
	                        "3ae6d692e971"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-694000 -n ingress-addon-legacy-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-694000 -n ingress-addon-legacy-694000: exit status 6 (409.402466ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0213 18:33:09.429717   42878 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-694000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-694000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)

TestSkaffold (318.74s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe3475599156 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe3475599156 version: (1.761938435s)
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-163000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-163000 --memory=2600 --driver=docker : (22.53135906s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe3475599156 run --minikube-profile skaffold-163000 --kube-context skaffold-163000 --status-check=true --port-forward=false --interactive=false
E0213 18:49:40.359090   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:50:23.837682   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:52:43.403467   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe3475599156 run --minikube-profile skaffold-163000 --kube-context skaffold-163000 --status-check=true --port-forward=false --interactive=false: signal: killed (4m42.058007226s)

-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	 - base -> base:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	 - base: Not found. Building
	Starting build...
	Found [skaffold-163000] context, using local docker daemon.
	Building [base]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load .dockerignore
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 transferring dockerfile: 250B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for gcr.io/distroless/base:latest
	#3 DONE 2.4s
	
	#4 [1/1] FROM gcr.io/distroless/base@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f
	#4 resolve gcr.io/distroless/base@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f done
	#4 sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f 1.51kB / 1.51kB done
	#4 sha256:13190661cbc681abf8c1f3546231bb1ff46c88ce4750a2818426c6e493a09163 2.12kB / 2.12kB done
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 0B / 103.78kB 0.1s
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0B / 755.29kB 0.1s
	#4 sha256:c8500b45821ad3ad625d1689bbe0fd12ca31d22865fbf19cc2e982f759ae2133 1.60kB / 1.60kB done
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 0B / 21.20kB 0.1s
	#4 extracting sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea done
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 0B / 317B 0.9s
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 103.78kB / 103.78kB 0.9s done
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 21.20kB / 21.20kB 1.2s done
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 317B / 317B 1.3s done
	#4 extracting sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 done
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 0B / 113B 1.4s
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 0B / 198B 1.4s
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 755.29kB / 755.29kB 1.6s done
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 0B / 385B 1.6s
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 113B / 113B 1.7s done
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 198B / 198B 1.6s done
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 0B / 355B 1.8s
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 0B / 130.56kB 1.8s
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0.6s done
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 385B / 385B 2.1s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0B / 5.85MB 2.1s
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 355B / 355B 2.1s done
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 130.56kB / 130.56kB 2.2s done
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0B / 2.06MB 2.3s
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0B / 968.57kB 2.3s
	#4 extracting sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 done
	#4 extracting sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 done
	#4 extracting sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c done
	#4 extracting sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f done
	#4 extracting sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c done
	#4 extracting sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 1.05MB / 5.85MB 2.8s
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 2.10MB / 5.85MB 2.9s
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 4.19MB / 5.85MB 3.0s
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 5.85MB / 5.85MB 3.0s done
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0.2s done
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 1.05MB / 2.06MB 3.5s
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 968.57kB / 968.57kB 3.5s done
	#4 extracting sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 2.06MB / 2.06MB 3.6s done
	#4 extracting sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0.0s done
	#4 extracting sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0.0s done
	#4 DONE 3.8s
	
	#5 exporting to image
	#5 exporting layers done
	#5 writing image sha256:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789 done
	#5 naming to docker.io/library/base:latest done
	#5 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [base] succeeded
	Building [leeroy-app]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load .dockerignore
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 transferring dockerfile: 326B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#3 DONE 0.0s
	
	#4 [internal] load metadata for docker.io/library/golang:1.18
	#4 DONE 1.0s
	
	#5 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#5 CACHED
	
	#6 [internal] load build context
	#6 transferring context: 430B done
	#6 DONE 0.0s
	
	#7 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#7 resolve docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da done
	#7 sha256:c37a56a6d65476eabfb50e74421f16f415093e2d1bdd7f83e8bbb4b1a3eb2109 7.12kB / 7.12kB done
	#7 sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da 2.36kB / 2.36kB done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 0B / 55.03MB 0.1s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0B / 5.16MB 0.1s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0B / 10.88MB 0.1s
	#7 sha256:740324e52de766f230ad7113fac9028399d6e03af34883de625dc2230ef7927e 1.80kB / 1.80kB done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 5.24MB / 55.03MB 0.2s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 5.16MB / 5.16MB 0.2s done
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 7.34MB / 10.88MB 0.2s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 0B / 54.58MB 0.2s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 23.07MB / 55.03MB 0.3s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 10.88MB / 10.88MB 0.2s done
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 0B / 85.98MB 0.3s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 35.65MB / 55.03MB 0.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 0.4s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 55.03MB / 55.03MB 0.6s done
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 10.49MB / 54.58MB 0.6s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 14.68MB / 54.58MB 0.7s
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 0.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 0B / 141.98MB 0.7s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 34.60MB / 54.58MB 0.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 28.67MB / 141.98MB 0.9s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 40.89MB / 54.58MB 1.0s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 39.85MB / 141.98MB 1.0s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 47.19MB / 54.58MB 1.1s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 52.43MB / 141.98MB 1.1s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 51.38MB / 54.58MB 1.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 1.2s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 54.58MB / 54.58MB 1.3s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 54.58MB / 54.58MB 1.3s done
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 0B / 156B 1.4s
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 156B / 156B 1.4s done
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 3.8s done
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0.3s done
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.1s
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.3s done
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 5.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 6.2s
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 3.6s done
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 10.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 11.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 15.6s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 16.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 20.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 21.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 25.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 26.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 30.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 31.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 35.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 36.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 40.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 41.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 46.0s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 46.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 51.1s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 51.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 56.1s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 57.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 61.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 62.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 66.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 67.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 71.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 72.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 76.4s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 77.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 81.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 82.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 86.6s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 87.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 91.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 92.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 96.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 98.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 101.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 103.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 106.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 108.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 112.0s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 113.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 117.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 118.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 122.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 123.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 127.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 128.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 132.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 133.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 137.4s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 138.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 142.4s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 143.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 147.4s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 148.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 152.4s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 153.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 157.6s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 159.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 162.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 164.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 167.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 169.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 172.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 174.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 177.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 179.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 183.0s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 184.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 188.1s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 189.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 193.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 194.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 198.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 199.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 203.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 204.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 25.17MB / 85.98MB 205.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 33.55MB / 85.98MB 205.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 48.23MB / 85.98MB 205.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 69.21MB / 141.98MB 205.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 72.35MB / 85.98MB 206.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 85.98MB / 85.98MB 206.2s done
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 80.74MB / 141.98MB 206.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 112.20MB / 141.98MB 206.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 132.12MB / 141.98MB 206.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 141.98MB / 141.98MB 206.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 141.98MB / 141.98MB 206.9s done
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 3.6s done
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 5.1s
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 8.9s done
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a done
	#7 DONE 219.2s
	
	#8 [builder 2/5] WORKDIR /code
	#8 DONE 0.2s
	
	#9 [builder 3/5] COPY app.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .
	#11 DONE 22.7s
	
	#12 [stage-1 2/2] COPY --from=builder /app .
	#12 DONE 0.0s
	
	#13 exporting to image
	#13 exporting layers 0.0s done
	#13 writing image sha256:3f3895b58bdf52fb0a95598847b4196c1375c12732ee979b2c3880602d6a7ee3 done
	#13 naming to docker.io/library/leeroy-app:latest done
	#13 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [leeroy-app] succeeded
	Building [leeroy-web]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load .dockerignore
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 transferring dockerfile: 326B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#3 DONE 0.0s
	
	#4 [internal] load metadata for docker.io/library/golang:1.18
	#4 DONE 0.3s
	
	#5 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#5 DONE 0.0s
	
	#6 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#6 DONE 0.0s
	
	#7 [builder 2/5] WORKDIR /code
	#7 CACHED
	
	#8 [internal] load build context
	#8 transferring context: 565B done
	#8 DONE 0.0s
	
	#9 [builder 3/5] COPY web.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .

-- /stdout --
** stderr ** 
	time="2024-02-13T18:49:36-08:00" level=error msg="ERROR: (gcloud.config.config-helper) You do not currently have an active account selected."
	time="2024-02-13T18:49:36-08:00" level=error msg="Please run:"
	time="2024-02-13T18:49:36-08:00" level=error
	time="2024-02-13T18:49:36-08:00" level=error msg="  $ gcloud auth login"
	time="2024-02-13T18:49:36-08:00" level=error
	time="2024-02-13T18:49:36-08:00" level=error msg="to obtain new credentials."
	time="2024-02-13T18:49:36-08:00" level=error
	time="2024-02-13T18:49:36-08:00" level=error msg="If you have already logged in with a different account, run:"
	time="2024-02-13T18:49:36-08:00" level=error
	time="2024-02-13T18:49:36-08:00" level=error msg="  $ gcloud config set account ACCOUNT"
	time="2024-02-13T18:49:36-08:00" level=error
	time="2024-02-13T18:49:36-08:00" level=error msg="to select an already authenticated account to use."

** /stderr **
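For context on the failure recorded below: the skaffold run was killed (signal: killed) after roughly 4m42s while the leeroy-web image was still in its `go build` step, and the gcloud messages in the stderr block above appear to be skaffold probing for Google Cloud credentials; the image builds proceeded after they were emitted, so they are unlikely to be the direct cause. A hedged manual re-run of the same invocation, adding only the debug verbosity the log itself suggests, would look like:

	# assumes the same skaffold example checkout and the running skaffold-163000 profile
	skaffold run --minikube-profile skaffold-163000 --kube-context skaffold-163000 \
	  --status-check=true --port-forward=false --interactive=false -vdebug

The flags mirror the command at skaffold_test.go:105; -vdebug follows the "Some taggers failed. Rerun with -vdebug for errors." hint in the stdout above.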
skaffold_test.go:107: error running skaffold: signal: killed

-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	 - base -> base:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	 - base: Not found. Building
	Starting build...
	Found [skaffold-163000] context, using local docker daemon.
	Building [base]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load .dockerignore
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 transferring dockerfile: 250B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for gcr.io/distroless/base:latest
	#3 DONE 2.4s
	
	#4 [1/1] FROM gcr.io/distroless/base@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f
	#4 resolve gcr.io/distroless/base@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f done
	#4 sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f 1.51kB / 1.51kB done
	#4 sha256:13190661cbc681abf8c1f3546231bb1ff46c88ce4750a2818426c6e493a09163 2.12kB / 2.12kB done
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 0B / 103.78kB 0.1s
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0B / 755.29kB 0.1s
	#4 sha256:c8500b45821ad3ad625d1689bbe0fd12ca31d22865fbf19cc2e982f759ae2133 1.60kB / 1.60kB done
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 0B / 21.20kB 0.1s
	#4 extracting sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea done
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 0B / 317B 0.9s
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 103.78kB / 103.78kB 0.9s done
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 21.20kB / 21.20kB 1.2s done
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 317B / 317B 1.3s done
	#4 extracting sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 done
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 0B / 113B 1.4s
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 0B / 198B 1.4s
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 755.29kB / 755.29kB 1.6s done
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 0B / 385B 1.6s
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 113B / 113B 1.7s done
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 198B / 198B 1.6s done
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 0B / 355B 1.8s
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 0B / 130.56kB 1.8s
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0.6s done
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 385B / 385B 2.1s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0B / 5.85MB 2.1s
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 355B / 355B 2.1s done
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 130.56kB / 130.56kB 2.2s done
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0B / 2.06MB 2.3s
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0B / 968.57kB 2.3s
	#4 extracting sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 done
	#4 extracting sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 done
	#4 extracting sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c done
	#4 extracting sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f done
	#4 extracting sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c done
	#4 extracting sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 1.05MB / 5.85MB 2.8s
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 2.10MB / 5.85MB 2.9s
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 4.19MB / 5.85MB 3.0s
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 5.85MB / 5.85MB 3.0s done
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0.2s done
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 1.05MB / 2.06MB 3.5s
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 968.57kB / 968.57kB 3.5s done
	#4 extracting sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 2.06MB / 2.06MB 3.6s done
	#4 extracting sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0.0s done
	#4 extracting sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0.0s done
	#4 DONE 3.8s
	
	#5 exporting to image
	#5 exporting layers done
	#5 writing image sha256:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789 done
	#5 naming to docker.io/library/base:latest done
	#5 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [base] succeeded
	Building [leeroy-app]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load .dockerignore
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 transferring dockerfile: 326B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#3 DONE 0.0s
	
	#4 [internal] load metadata for docker.io/library/golang:1.18
	#4 DONE 1.0s
	
	#5 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#5 CACHED
	
	#6 [internal] load build context
	#6 transferring context: 430B done
	#6 DONE 0.0s
	
	#7 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#7 resolve docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da done
	#7 sha256:c37a56a6d65476eabfb50e74421f16f415093e2d1bdd7f83e8bbb4b1a3eb2109 7.12kB / 7.12kB done
	#7 sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da 2.36kB / 2.36kB done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 0B / 55.03MB 0.1s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0B / 5.16MB 0.1s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0B / 10.88MB 0.1s
	#7 sha256:740324e52de766f230ad7113fac9028399d6e03af34883de625dc2230ef7927e 1.80kB / 1.80kB done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 5.24MB / 55.03MB 0.2s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 5.16MB / 5.16MB 0.2s done
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 7.34MB / 10.88MB 0.2s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 0B / 54.58MB 0.2s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 23.07MB / 55.03MB 0.3s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 10.88MB / 10.88MB 0.2s done
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 0B / 85.98MB 0.3s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 35.65MB / 55.03MB 0.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 0.4s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 55.03MB / 55.03MB 0.6s done
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 10.49MB / 54.58MB 0.6s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 14.68MB / 54.58MB 0.7s
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 0.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 0B / 141.98MB 0.7s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 34.60MB / 54.58MB 0.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 28.67MB / 141.98MB 0.9s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 40.89MB / 54.58MB 1.0s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 39.85MB / 141.98MB 1.0s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 47.19MB / 54.58MB 1.1s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 52.43MB / 141.98MB 1.1s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 51.38MB / 54.58MB 1.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 1.2s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 54.58MB / 54.58MB 1.3s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 54.58MB / 54.58MB 1.3s done
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 0B / 156B 1.4s
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 156B / 156B 1.4s done
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 3.8s done
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0.3s done
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.1s
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.3s done
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 5.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 6.2s
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 3.6s done
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 10.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 11.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 15.6s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 16.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 20.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 21.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 25.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 26.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 30.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 31.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 35.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 36.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 40.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 41.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 46.0s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 46.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 51.1s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 51.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 56.1s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 57.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 61.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 62.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 66.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 67.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 71.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 72.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 76.4s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 77.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 81.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 82.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 86.6s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 87.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 91.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 92.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 96.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 98.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 101.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 103.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 106.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 108.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 112.0s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 113.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 117.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 118.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 122.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 123.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 127.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 128.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 132.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 133.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 137.4s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 138.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 142.4s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 143.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 147.4s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 148.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 152.4s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 153.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 157.6s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 159.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 162.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 164.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 167.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 169.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 172.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 174.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 177.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 179.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 183.0s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 184.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 188.1s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 189.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 193.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 194.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 198.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 199.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 15.73MB / 85.98MB 203.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 204.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 25.17MB / 85.98MB 205.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 33.55MB / 85.98MB 205.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 48.23MB / 85.98MB 205.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 69.21MB / 141.98MB 205.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 72.35MB / 85.98MB 206.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 85.98MB / 85.98MB 206.2s done
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 80.74MB / 141.98MB 206.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 112.20MB / 141.98MB 206.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 132.12MB / 141.98MB 206.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 141.98MB / 141.98MB 206.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 141.98MB / 141.98MB 206.9s done
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 3.6s done
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 5.1s
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 8.9s done
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a done
	#7 DONE 219.2s
	
	#8 [builder 2/5] WORKDIR /code
	#8 DONE 0.2s
	
	#9 [builder 3/5] COPY app.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .
	#11 DONE 22.7s
	
	#12 [stage-1 2/2] COPY --from=builder /app .
	#12 DONE 0.0s
	
	#13 exporting to image
	#13 exporting layers 0.0s done
	#13 writing image sha256:3f3895b58bdf52fb0a95598847b4196c1375c12732ee979b2c3880602d6a7ee3 done
	#13 naming to docker.io/library/leeroy-app:latest done
	#13 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [leeroy-app] succeeded
	Building [leeroy-web]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load .dockerignore
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 transferring dockerfile: 326B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#3 DONE 0.0s
	
	#4 [internal] load metadata for docker.io/library/golang:1.18
	#4 DONE 0.3s
	
	#5 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#5 DONE 0.0s
	
	#6 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#6 DONE 0.0s
	
	#7 [builder 2/5] WORKDIR /code
	#7 CACHED
	
	#8 [internal] load build context
	#8 transferring context: 565B done
	#8 DONE 0.0s
	
	#9 [builder 3/5] COPY web.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .

-- /stdout --
** stderr ** 
	time="2024-02-13T18:49:36-08:00" level=error msg="ERROR: (gcloud.config.config-helper) You do not currently have an active account selected."
	time="2024-02-13T18:49:36-08:00" level=error msg="Please run:"
	time="2024-02-13T18:49:36-08:00" level=error
	time="2024-02-13T18:49:36-08:00" level=error msg="  $ gcloud auth login"
	time="2024-02-13T18:49:36-08:00" level=error
	time="2024-02-13T18:49:36-08:00" level=error msg="to obtain new credentials."
	time="2024-02-13T18:49:36-08:00" level=error
	time="2024-02-13T18:49:36-08:00" level=error msg="If you have already logged in with a different account, run:"
	time="2024-02-13T18:49:36-08:00" level=error
	time="2024-02-13T18:49:36-08:00" level=error msg="  $ gcloud config set account ACCOUNT"
	time="2024-02-13T18:49:36-08:00" level=error
	time="2024-02-13T18:49:36-08:00" level=error msg="to select an already authenticated account to use."

                                                
                                                
** /stderr **
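The stderr block captures gcloud reporting that no active account is selected on the build agent; the subsequent image builds still succeed against Docker Hub, so the messages look incidental rather than fatal here, but they would matter for any push to a gcr.io registry. A minimal sketch of the remediation the messages themselves suggest (ACCOUNT is a placeholder for an already-authenticated account):

	# Obtain fresh credentials interactively
	gcloud auth login
	# Or select an account that is already authenticated on this machine
	gcloud config set account ACCOUNT
	# Verify which account is now active
	gcloud auth list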
panic.go:523: *** TestSkaffold FAILED at 2024-02-13 18:54:13.51696 -0800 PST m=+2792.491085266
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-163000
helpers_test.go:235: (dbg) docker inspect skaffold-163000:

-- stdout --
	[
	    {
	        "Id": "e4b3a6a73d2007e71cb45c2893ba3341bc41b27219976cf445b81144df648f4f",
	        "Created": "2024-02-14T02:49:13.062180004Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T02:49:13.267859472Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e4b3a6a73d2007e71cb45c2893ba3341bc41b27219976cf445b81144df648f4f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4b3a6a73d2007e71cb45c2893ba3341bc41b27219976cf445b81144df648f4f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4b3a6a73d2007e71cb45c2893ba3341bc41b27219976cf445b81144df648f4f/hosts",
	        "LogPath": "/var/lib/docker/containers/e4b3a6a73d2007e71cb45c2893ba3341bc41b27219976cf445b81144df648f4f/e4b3a6a73d2007e71cb45c2893ba3341bc41b27219976cf445b81144df648f4f-json.log",
	        "Name": "/skaffold-163000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "skaffold-163000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "skaffold-163000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2726297600,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2726297600,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/618d1c5b0905cbd578f2a48cc8a9eab3b861d0e10733fa1edf36b10460204ad5-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/618d1c5b0905cbd578f2a48cc8a9eab3b861d0e10733fa1edf36b10460204ad5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/618d1c5b0905cbd578f2a48cc8a9eab3b861d0e10733fa1edf36b10460204ad5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/618d1c5b0905cbd578f2a48cc8a9eab3b861d0e10733fa1edf36b10460204ad5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "skaffold-163000",
	                "Source": "/var/lib/docker/volumes/skaffold-163000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-163000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-163000",
	                "name.minikube.sigs.k8s.io": "skaffold-163000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f003f346498437096b4b30c2565468296bc5e505fa7d5921fc40d7f02b9f09d2",
	            "SandboxKey": "/var/run/docker/netns/f003f3464984",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54812"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54813"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54814"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-163000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e4b3a6a73d20",
	                        "skaffold-163000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "8429878684dd42a67261202bfbf304e57e2679c8b9c574723558046b886ae142",
	                    "EndpointID": "e51e63410e5ca6108dfde98e25e6106d02173531ee4f5adfeb63fde1b31276a0",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "skaffold-163000",
	                        "e4b3a6a73d20"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
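The inspect output above shows the skaffold-163000 kicbase container publishing its service ports (22 for SSH, 2376 for the in-node Docker daemon, 8443 for the Kubernetes API server, plus 5000 and 32443) on 127.0.0.1 with ephemeral host ports. A short sketch of how those mappings can be read back directly, without walking the full JSON (container name taken from the log above):

	# List every published port of the node container
	docker port skaffold-163000
	# Or pull out a single mapping, e.g. the API server port
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' skaffold-163000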
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-163000 -n skaffold-163000
helpers_test.go:244: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p skaffold-163000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p skaffold-163000 logs -n 25: (2.476328974s)
helpers_test.go:252: TestSkaffold logs: 
-- stdout --
	
	==> Audit <==
	|------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	|  Command   |              Args              |        Profile        |   User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	| start      | -p multinode-315000-m02        | multinode-315000-m02  | jenkins  | v1.32.0 | 13 Feb 24 18:44 PST |                     |
	|            | --driver=docker                |                       |          |         |                     |                     |
	| start      | -p multinode-315000-m03        | multinode-315000-m03  | jenkins  | v1.32.0 | 13 Feb 24 18:44 PST | 13 Feb 24 18:44 PST |
	|            | --driver=docker                |                       |          |         |                     |                     |
	| node       | add -p multinode-315000        | multinode-315000      | jenkins  | v1.32.0 | 13 Feb 24 18:44 PST |                     |
	| delete     | -p multinode-315000-m03        | multinode-315000-m03  | jenkins  | v1.32.0 | 13 Feb 24 18:44 PST | 13 Feb 24 18:44 PST |
	| delete     | -p multinode-315000            | multinode-315000      | jenkins  | v1.32.0 | 13 Feb 24 18:44 PST | 13 Feb 24 18:44 PST |
	| start      | -p test-preload-327000         | test-preload-327000   | jenkins  | v1.32.0 | 13 Feb 24 18:44 PST | 13 Feb 24 18:46 PST |
	|            | --memory=2200                  |                       |          |         |                     |                     |
	|            | --alsologtostderr              |                       |          |         |                     |                     |
	|            | --wait=true --preload=false    |                       |          |         |                     |                     |
	|            | --driver=docker                |                       |          |         |                     |                     |
	|            | --kubernetes-version=v1.24.4   |                       |          |         |                     |                     |
	| image      | test-preload-327000 image pull | test-preload-327000   | jenkins  | v1.32.0 | 13 Feb 24 18:46 PST | 13 Feb 24 18:46 PST |
	|            | gcr.io/k8s-minikube/busybox    |                       |          |         |                     |                     |
	| stop       | -p test-preload-327000         | test-preload-327000   | jenkins  | v1.32.0 | 13 Feb 24 18:46 PST | 13 Feb 24 18:46 PST |
	| start      | -p test-preload-327000         | test-preload-327000   | jenkins  | v1.32.0 | 13 Feb 24 18:46 PST | 13 Feb 24 18:47 PST |
	|            | --memory=2200                  |                       |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                       |          |         |                     |                     |
	|            | --wait=true --driver=docker    |                       |          |         |                     |                     |
	| image      | test-preload-327000 image list | test-preload-327000   | jenkins  | v1.32.0 | 13 Feb 24 18:47 PST | 13 Feb 24 18:47 PST |
	| delete     | -p test-preload-327000         | test-preload-327000   | jenkins  | v1.32.0 | 13 Feb 24 18:47 PST | 13 Feb 24 18:47 PST |
	| start      | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:47 PST | 13 Feb 24 18:47 PST |
	|            | --memory=2048 --driver=docker  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:47 PST |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:47 PST |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:47 PST |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:47 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:47 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:47 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:47 PST | 13 Feb 24 18:47 PST |
	|            | --cancel-scheduled             |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:48 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:48 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:48 PST | 13 Feb 24 18:48 PST |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| delete     | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 18:48 PST | 13 Feb 24 18:49 PST |
	| start      | -p skaffold-163000             | skaffold-163000       | jenkins  | v1.32.0 | 13 Feb 24 18:49 PST | 13 Feb 24 18:49 PST |
	|            | --memory=2600 --driver=docker  |                       |          |         |                     |                     |
	| docker-env | --shell none -p                | skaffold-163000       | skaffold | v1.32.0 | 13 Feb 24 18:49 PST | 13 Feb 24 18:49 PST |
	|            | skaffold-163000                |                       |          |         |                     |                     |
	|            | --user=skaffold                |                       |          |         |                     |                     |
	|------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 18:49:08
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 18:49:08.914829   46996 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:49:08.915090   46996 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:49:08.915093   46996 out.go:304] Setting ErrFile to fd 2...
	I0213 18:49:08.915097   46996 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:49:08.915279   46996 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 18:49:08.916715   46996 out.go:298] Setting JSON to false
	I0213 18:49:08.940130   46996 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":15807,"bootTime":1707863141,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 18:49:08.940236   46996 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 18:49:08.968865   46996 out.go:177] * [skaffold-163000] minikube v1.32.0 on Darwin 14.3.1
	I0213 18:49:09.046109   46996 out.go:177]   - MINIKUBE_LOCATION=18165
	I0213 18:49:09.010686   46996 notify.go:220] Checking for updates...
	I0213 18:49:09.088776   46996 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 18:49:09.110891   46996 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 18:49:09.131701   46996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 18:49:09.152957   46996 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	I0213 18:49:09.173917   46996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 18:49:09.196178   46996 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 18:49:09.252446   46996 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 18:49:09.252596   46996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:49:09.356831   46996 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-14 02:49:09.346859457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:49:09.399470   46996 out.go:177] * Using the docker driver based on user configuration
	I0213 18:49:09.420476   46996 start.go:298] selected driver: docker
	I0213 18:49:09.420488   46996 start.go:902] validating driver "docker" against <nil>
	I0213 18:49:09.420496   46996 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 18:49:09.425889   46996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:49:09.528914   46996 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-14 02:49:09.518653416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:49:09.529075   46996 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 18:49:09.529244   46996 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 18:49:09.550242   46996 out.go:177] * Using Docker Desktop driver with root privileges
	I0213 18:49:09.571639   46996 cni.go:84] Creating CNI manager for ""
	I0213 18:49:09.571680   46996 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 18:49:09.571724   46996 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 18:49:09.571747   46996 start_flags.go:321] config:
	{Name:skaffold-163000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:skaffold-163000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 18:49:09.593138   46996 out.go:177] * Starting control plane node skaffold-163000 in cluster skaffold-163000
	I0213 18:49:09.636316   46996 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 18:49:09.680140   46996 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 18:49:09.701224   46996 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 18:49:09.701248   46996 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 18:49:09.701257   46996 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0213 18:49:09.701263   46996 cache.go:56] Caching tarball of preloaded images
	I0213 18:49:09.701370   46996 preload.go:174] Found /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 18:49:09.701376   46996 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 18:49:09.702203   46996 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/config.json ...
	I0213 18:49:09.702260   46996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/config.json: {Name:mk2f8b016f10d18d7e593d971ad656b2f0dc17a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:49:09.751119   46996 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 18:49:09.751127   46996 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 18:49:09.751143   46996 cache.go:194] Successfully downloaded all kic artifacts
	I0213 18:49:09.751176   46996 start.go:365] acquiring machines lock for skaffold-163000: {Name:mk2b2e0f8d938b60c0e695959732a42d19e10be9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 18:49:09.751329   46996 start.go:369] acquired machines lock for "skaffold-163000" in 132.774µs
	I0213 18:49:09.751353   46996 start.go:93] Provisioning new machine with config: &{Name:skaffold-163000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:skaffold-163000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnet
Path: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 18:49:09.751415   46996 start.go:125] createHost starting for "" (driver="docker")
	I0213 18:49:09.794359   46996 out.go:204] * Creating docker container (CPUs=2, Memory=2600MB) ...
	I0213 18:49:09.794688   46996 start.go:159] libmachine.API.Create for "skaffold-163000" (driver="docker")
	I0213 18:49:09.794807   46996 client.go:168] LocalClient.Create starting
	I0213 18:49:09.795011   46996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem
	I0213 18:49:09.795098   46996 main.go:141] libmachine: Decoding PEM data...
	I0213 18:49:09.795121   46996 main.go:141] libmachine: Parsing certificate...
	I0213 18:49:09.795201   46996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem
	I0213 18:49:09.795279   46996 main.go:141] libmachine: Decoding PEM data...
	I0213 18:49:09.795292   46996 main.go:141] libmachine: Parsing certificate...
	I0213 18:49:09.796040   46996 cli_runner.go:164] Run: docker network inspect skaffold-163000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0213 18:49:09.851225   46996 cli_runner.go:211] docker network inspect skaffold-163000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0213 18:49:09.851352   46996 network_create.go:281] running [docker network inspect skaffold-163000] to gather additional debugging logs...
	I0213 18:49:09.851376   46996 cli_runner.go:164] Run: docker network inspect skaffold-163000
	W0213 18:49:09.917231   46996 cli_runner.go:211] docker network inspect skaffold-163000 returned with exit code 1
	I0213 18:49:09.917248   46996 network_create.go:284] error running [docker network inspect skaffold-163000]: docker network inspect skaffold-163000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network skaffold-163000 not found
	I0213 18:49:09.917265   46996 network_create.go:286] output of [docker network inspect skaffold-163000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network skaffold-163000 not found
	
	** /stderr **
	I0213 18:49:09.917415   46996 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 18:49:09.977919   46996 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 18:49:09.978309   46996 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00232ec20}
	I0213 18:49:09.978325   46996 network_create.go:124] attempt to create docker network skaffold-163000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0213 18:49:09.978409   46996 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-163000 skaffold-163000
	W0213 18:49:10.032759   46996 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-163000 skaffold-163000 returned with exit code 1
	W0213 18:49:10.032782   46996 network_create.go:149] failed to create docker network skaffold-163000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-163000 skaffold-163000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0213 18:49:10.032798   46996 network_create.go:116] failed to create docker network skaffold-163000 192.168.58.0/24, will retry: subnet is taken
	I0213 18:49:10.034259   46996 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 18:49:10.034583   46996 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022a5af0}
	I0213 18:49:10.034599   46996 network_create.go:124] attempt to create docker network skaffold-163000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0213 18:49:10.034662   46996 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-163000 skaffold-163000
	I0213 18:49:10.121110   46996 network_create.go:108] docker network skaffold-163000 192.168.67.0/24 created
	I0213 18:49:10.121145   46996 kic.go:121] calculated static IP "192.168.67.2" for the "skaffold-163000" container
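
The two attempts above show the subnet-probing behaviour: Docker rejected 192.168.58.0/24 with "Pool overlaps with other one on this address space", so the next free private /24 (192.168.67.0/24) was tried and succeeded. A minimal Go sketch of that retry-on-overlap pattern follows; it is illustrative only, not minikube's network_create code, and the candidate subnet list, label and helper name are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetworkWithRetry walks a list of candidate /24 subnets and creates a
// bridge network on the first one Docker accepts, skipping any subnet that
// fails with an address-space overlap.
func createNetworkWithRetry(name string, subnets []string) (string, error) {
	for _, cidr := range subnets {
		gateway := strings.TrimSuffix(cidr, "0/24") + "1" // e.g. 192.168.58.0/24 -> 192.168.58.1
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+cidr,
			"--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=65535",
			"--label=created_by.minikube.sigs.k8s.io=true",
			name).CombinedOutput()
		if err == nil {
			return cidr, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet already taken by another network, try the next one
		}
		return "", fmt.Errorf("docker network create failed: %v: %s", err, out)
	}
	return "", fmt.Errorf("no free subnet found for network %q", name)
}

func main() {
	subnet, err := createNetworkWithRetry("skaffold-163000",
		[]string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"})
	fmt.Println(subnet, err)
}
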
	I0213 18:49:10.121253   46996 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0213 18:49:10.171923   46996 cli_runner.go:164] Run: docker volume create skaffold-163000 --label name.minikube.sigs.k8s.io=skaffold-163000 --label created_by.minikube.sigs.k8s.io=true
	I0213 18:49:10.224185   46996 oci.go:103] Successfully created a docker volume skaffold-163000
	I0213 18:49:10.224301   46996 cli_runner.go:164] Run: docker run --rm --name skaffold-163000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-163000 --entrypoint /usr/bin/test -v skaffold-163000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0213 18:49:10.622178   46996 oci.go:107] Successfully prepared a docker volume skaffold-163000
	I0213 18:49:10.622205   46996 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 18:49:10.622219   46996 kic.go:194] Starting extracting preloaded images to volume ...
	I0213 18:49:10.622317   46996 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-163000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0213 18:49:12.906520   46996 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-163000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.28415093s)
	I0213 18:49:12.906541   46996 kic.go:203] duration metric: took 2.284360 seconds to extract preloaded images to volume
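
The volume step at 18:49:10–18:49:12 unpacks the preloaded image tarball directly into the skaffold-163000 volume by running a one-shot container whose entrypoint is /usr/bin/tar, so the host never needs tar or lz4 itself. A hedged Go sketch of the same pattern (the flags mirror the command in the log; the helper name and the local tarball path in main are placeholders of mine):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreloadIntoVolume bind-mounts the lz4-compressed preload tarball
// read-only, mounts the target named volume, and untars inside a throwaway
// container that is removed when it exits.
func extractPreloadIntoVolume(tarballPath, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarballPath+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("preload extraction failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// An absolute host path is required for the bind mount; this location is a placeholder.
	err := extractPreloadIntoVolume(
		"/tmp/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4",
		"skaffold-163000",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866")
	fmt.Println(err)
}
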
	I0213 18:49:12.906654   46996 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0213 18:49:13.009493   46996 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-163000 --name skaffold-163000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-163000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-163000 --network skaffold-163000 --ip 192.168.67.2 --volume skaffold-163000:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0213 18:49:13.275954   46996 cli_runner.go:164] Run: docker container inspect skaffold-163000 --format={{.State.Running}}
	I0213 18:49:13.333630   46996 cli_runner.go:164] Run: docker container inspect skaffold-163000 --format={{.State.Status}}
	I0213 18:49:13.392391   46996 cli_runner.go:164] Run: docker exec skaffold-163000 stat /var/lib/dpkg/alternatives/iptables
	I0213 18:49:13.518410   46996 oci.go:144] the created container "skaffold-163000" has a running status.
	I0213 18:49:13.518475   46996 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/skaffold-163000/id_rsa...
	I0213 18:49:13.876497   46996 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/skaffold-163000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0213 18:49:13.962034   46996 cli_runner.go:164] Run: docker container inspect skaffold-163000 --format={{.State.Status}}
	I0213 18:49:14.013535   46996 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0213 18:49:14.013555   46996 kic_runner.go:114] Args: [docker exec --privileged skaffold-163000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0213 18:49:14.104174   46996 cli_runner.go:164] Run: docker container inspect skaffold-163000 --format={{.State.Status}}
	I0213 18:49:14.155251   46996 machine.go:88] provisioning docker machine ...
	I0213 18:49:14.155291   46996 ubuntu.go:169] provisioning hostname "skaffold-163000"
	I0213 18:49:14.155393   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:14.206089   46996 main.go:141] libmachine: Using SSH client type: native
	I0213 18:49:14.206398   46996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54812 <nil> <nil>}
	I0213 18:49:14.206409   46996 main.go:141] libmachine: About to run SSH command:
	sudo hostname skaffold-163000 && echo "skaffold-163000" | sudo tee /etc/hostname
	I0213 18:49:14.365210   46996 main.go:141] libmachine: SSH cmd err, output: <nil>: skaffold-163000
	
	I0213 18:49:14.365284   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:14.417802   46996 main.go:141] libmachine: Using SSH client type: native
	I0213 18:49:14.418079   46996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54812 <nil> <nil>}
	I0213 18:49:14.418088   46996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sskaffold-163000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 skaffold-163000/g' /etc/hosts;
				else 
					echo '127.0.1.1 skaffold-163000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 18:49:14.557944   46996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 18:49:14.557968   46996 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18165-38421/.minikube CaCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18165-38421/.minikube}
	I0213 18:49:14.557987   46996 ubuntu.go:177] setting up certificates
	I0213 18:49:14.557997   46996 provision.go:83] configureAuth start
	I0213 18:49:14.558071   46996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-163000
	I0213 18:49:14.609765   46996 provision.go:138] copyHostCerts
	I0213 18:49:14.609885   46996 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem, removing ...
	I0213 18:49:14.609894   46996 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
	I0213 18:49:14.610049   46996 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem (1679 bytes)
	I0213 18:49:14.610277   46996 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem, removing ...
	I0213 18:49:14.610281   46996 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
	I0213 18:49:14.610363   46996 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem (1078 bytes)
	I0213 18:49:14.610535   46996 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem, removing ...
	I0213 18:49:14.610538   46996 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
	I0213 18:49:14.610622   46996 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem (1123 bytes)
	I0213 18:49:14.610772   46996 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem org=jenkins.skaffold-163000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube skaffold-163000]
	I0213 18:49:14.670577   46996 provision.go:172] copyRemoteCerts
	I0213 18:49:14.670629   46996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 18:49:14.670682   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:14.721658   46996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54812 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/skaffold-163000/id_rsa Username:docker}
	I0213 18:49:14.825279   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 18:49:14.864928   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0213 18:49:14.904287   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 18:49:14.943762   46996 provision.go:86] duration metric: configureAuth took 385.756842ms
	I0213 18:49:14.943772   46996 ubuntu.go:193] setting minikube options for container-runtime
	I0213 18:49:14.943923   46996 config.go:182] Loaded profile config "skaffold-163000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 18:49:14.943991   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:14.995666   46996 main.go:141] libmachine: Using SSH client type: native
	I0213 18:49:14.995976   46996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54812 <nil> <nil>}
	I0213 18:49:14.995992   46996 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 18:49:15.136773   46996 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 18:49:15.136785   46996 ubuntu.go:71] root file system type: overlay
	I0213 18:49:15.136875   46996 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 18:49:15.136961   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:15.187721   46996 main.go:141] libmachine: Using SSH client type: native
	I0213 18:49:15.188017   46996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54812 <nil> <nil>}
	I0213 18:49:15.188061   46996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 18:49:15.347318   46996 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 18:49:15.347411   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:15.398703   46996 main.go:141] libmachine: Using SSH client type: native
	I0213 18:49:15.398994   46996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54812 <nil> <nil>}
	I0213 18:49:15.399005   46996 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 18:49:16.018260   46996 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-14 02:49:15.341865851 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0213 18:49:16.018281   46996 machine.go:91] provisioned docker machine in 1.863043399s
	I0213 18:49:16.018287   46996 client.go:171] LocalClient.Create took 6.223574993s
	I0213 18:49:16.018323   46996 start.go:167] duration metric: libmachine.API.Create for "skaffold-163000" took 6.223737292s
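
The docker.service update above relies on a replace-only-if-changed idiom: the remote command diffs the rendered unit against the installed one and only moves it into place and restarts Docker when they differ, which is why the unified diff is printed before the SysV synchronisation messages. A rough local Go equivalent of that idiom (paths, service name and helper are assumptions; minikube performs this step over SSH rather than locally):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnitIfChanged writes newContent to unitPath and reloads/restarts the
// service only when the installed unit actually differs, avoiding needless
// daemon restarts. Must run as root.
func installUnitIfChanged(unitPath, service string, newContent []byte) error {
	current, err := os.ReadFile(unitPath)
	if err == nil && bytes.Equal(current, newContent) {
		return nil // unit already up to date
	}
	if err := os.WriteFile(unitPath, newContent, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n") // placeholder unit body
	fmt.Println(installUnitIfChanged("/lib/systemd/system/docker.service", "docker", unit))
}
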
	I0213 18:49:16.018328   46996 start.go:300] post-start starting for "skaffold-163000" (driver="docker")
	I0213 18:49:16.018334   46996 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 18:49:16.018528   46996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 18:49:16.018595   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:16.070363   46996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54812 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/skaffold-163000/id_rsa Username:docker}
	I0213 18:49:16.173410   46996 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 18:49:16.177573   46996 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 18:49:16.177597   46996 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 18:49:16.177603   46996 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 18:49:16.177606   46996 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 18:49:16.177615   46996 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/addons for local assets ...
	I0213 18:49:16.177711   46996 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/files for local assets ...
	I0213 18:49:16.177904   46996 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> 388992.pem in /etc/ssl/certs
	I0213 18:49:16.178097   46996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 18:49:16.193023   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /etc/ssl/certs/388992.pem (1708 bytes)
	I0213 18:49:16.233785   46996 start.go:303] post-start completed in 215.452708ms
	I0213 18:49:16.234390   46996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-163000
	I0213 18:49:16.286345   46996 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/config.json ...
	I0213 18:49:16.286820   46996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 18:49:16.286872   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:16.338744   46996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54812 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/skaffold-163000/id_rsa Username:docker}
	I0213 18:49:16.432437   46996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 18:49:16.437382   46996 start.go:128] duration metric: createHost completed in 6.686061446s
	I0213 18:49:16.437394   46996 start.go:83] releasing machines lock for "skaffold-163000", held for 6.686164823s
	I0213 18:49:16.437489   46996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-163000
	I0213 18:49:16.488529   46996 ssh_runner.go:195] Run: cat /version.json
	I0213 18:49:16.488540   46996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 18:49:16.488595   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:16.488609   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:16.545040   46996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54812 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/skaffold-163000/id_rsa Username:docker}
	I0213 18:49:16.545056   46996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54812 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/skaffold-163000/id_rsa Username:docker}
	I0213 18:49:16.747023   46996 ssh_runner.go:195] Run: systemctl --version
	I0213 18:49:16.752191   46996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 18:49:16.757042   46996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 18:49:16.799058   46996 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 18:49:16.799124   46996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 18:49:16.842032   46996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0213 18:49:16.842044   46996 start.go:475] detecting cgroup driver to use...
	I0213 18:49:16.842055   46996 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 18:49:16.842177   46996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 18:49:16.869867   46996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 18:49:16.885405   46996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 18:49:16.901097   46996 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 18:49:16.901162   46996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 18:49:16.917075   46996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 18:49:16.933372   46996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 18:49:16.949109   46996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 18:49:16.964854   46996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 18:49:16.980763   46996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 18:49:16.996549   46996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 18:49:17.011713   46996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 18:49:17.025958   46996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 18:49:17.086829   46996 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 18:49:17.172979   46996 start.go:475] detecting cgroup driver to use...
	I0213 18:49:17.172996   46996 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 18:49:17.173086   46996 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 18:49:17.192606   46996 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 18:49:17.192690   46996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 18:49:17.212728   46996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 18:49:17.244644   46996 ssh_runner.go:195] Run: which cri-dockerd
	I0213 18:49:17.249054   46996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 18:49:17.265022   46996 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 18:49:17.295981   46996 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 18:49:17.361335   46996 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 18:49:17.451043   46996 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 18:49:17.451118   46996 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 18:49:17.480022   46996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 18:49:17.540151   46996 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 18:49:17.780189   46996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 18:49:17.798541   46996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 18:49:17.815969   46996 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 18:49:17.880249   46996 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 18:49:17.944137   46996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 18:49:18.004584   46996 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 18:49:18.039750   46996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 18:49:18.056675   46996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 18:49:18.119060   46996 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 18:49:18.212701   46996 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 18:49:18.212782   46996 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 18:49:18.217291   46996 start.go:543] Will wait 60s for crictl version
	I0213 18:49:18.217337   46996 ssh_runner.go:195] Run: which crictl
	I0213 18:49:18.221379   46996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 18:49:18.273369   46996 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0213 18:49:18.273473   46996 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 18:49:18.297257   46996 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 18:49:18.371496   46996 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0213 18:49:18.371608   46996 cli_runner.go:164] Run: docker exec -t skaffold-163000 dig +short host.docker.internal
	I0213 18:49:18.486477   46996 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 18:49:18.486570   46996 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 18:49:18.491116   46996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
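
The bash one-liner just above makes the /etc/hosts update idempotent: it filters out any existing host.minikube.internal line, appends the freshly resolved address, and copies the temp file back into place. An illustrative Go version of the same upsert (path, IP and hostname are taken from the log; the helper itself is a sketch and must run as root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any line ending in "\t<host>" and appends "<ip>\t<host>",
// so repeated runs converge on a single current entry.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the stale entry
		}
		kept = append(kept, line)
	}
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1] // trim trailing blank lines before appending
	}
	kept = append(kept, ip+"\t"+host, "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
}

func main() {
	fmt.Println(upsertHostsEntry("/etc/hosts", "192.168.65.254", "host.minikube.internal"))
}
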
	I0213 18:49:18.508040   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:18.560114   46996 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 18:49:18.560185   46996 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 18:49:18.579406   46996 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 18:49:18.579434   46996 docker.go:615] Images already preloaded, skipping extraction
	I0213 18:49:18.579515   46996 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 18:49:18.597976   46996 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 18:49:18.597991   46996 cache_images.go:84] Images are preloaded, skipping loading
	I0213 18:49:18.598086   46996 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 18:49:18.645025   46996 cni.go:84] Creating CNI manager for ""
	I0213 18:49:18.645036   46996 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 18:49:18.645047   46996 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 18:49:18.645060   46996 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:skaffold-163000 NodeName:skaffold-163000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 18:49:18.645158   46996 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "skaffold-163000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 18:49:18.645217   46996 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=skaffold-163000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:skaffold-163000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 18:49:18.645280   46996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 18:49:18.660111   46996 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 18:49:18.660175   46996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 18:49:18.674636   46996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0213 18:49:18.705286   46996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 18:49:18.734722   46996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0213 18:49:18.764114   46996 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0213 18:49:18.768649   46996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 18:49:18.785420   46996 certs.go:56] Setting up /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000 for IP: 192.168.67.2
	I0213 18:49:18.785435   46996 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5f1a81e3b2f96d4314e8cdee92a3e3396cb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:49:18.785625   46996 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key
	I0213 18:49:18.785688   46996 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key
	I0213 18:49:18.785735   46996 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/client.key
	I0213 18:49:18.785745   46996 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/client.crt with IP's: []
	I0213 18:49:18.981466   46996 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/client.crt ...
	I0213 18:49:18.981474   46996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/client.crt: {Name:mkfcc0c33ca6fb283b6bc4e2ef074f8d6f229bee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:49:18.981779   46996 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/client.key ...
	I0213 18:49:18.981783   46996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/client.key: {Name:mkbbeb8f68c00de18ae28712b988ac012b647d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:49:18.982004   46996 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.key.c7fa3a9e
	I0213 18:49:18.982021   46996 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 18:49:19.114956   46996 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.crt.c7fa3a9e ...
	I0213 18:49:19.114964   46996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.crt.c7fa3a9e: {Name:mk5388955b5dcbf3b9b36e5d07cfd90f5700a0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:49:19.115263   46996 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.key.c7fa3a9e ...
	I0213 18:49:19.115269   46996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.key.c7fa3a9e: {Name:mk05bb693efeaf74e32faf3aa14dd54dcb51194f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:49:19.115473   46996 certs.go:337] copying /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.crt
	I0213 18:49:19.115645   46996 certs.go:341] copying /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.key
	I0213 18:49:19.115802   46996 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/proxy-client.key
	I0213 18:49:19.115817   46996 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/proxy-client.crt with IP's: []
	I0213 18:49:19.313365   46996 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/proxy-client.crt ...
	I0213 18:49:19.313374   46996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/proxy-client.crt: {Name:mk16871e77e16c04adfaa98035106788d0045f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:49:19.313675   46996 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/proxy-client.key ...
	I0213 18:49:19.313681   46996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/proxy-client.key: {Name:mkc7da779dc6cdea7b865011fbfeabf660159d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:49:19.314090   46996 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem (1338 bytes)
	W0213 18:49:19.314140   46996 certs.go:433] ignoring /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899_empty.pem, impossibly tiny 0 bytes
	I0213 18:49:19.314149   46996 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 18:49:19.314179   46996 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem (1078 bytes)
	I0213 18:49:19.314211   46996 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem (1123 bytes)
	I0213 18:49:19.314237   46996 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem (1679 bytes)
	I0213 18:49:19.314294   46996 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem (1708 bytes)
	I0213 18:49:19.314833   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 18:49:19.356577   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 18:49:19.396602   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 18:49:19.436917   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/skaffold-163000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 18:49:19.476667   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 18:49:19.516657   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 18:49:19.556606   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 18:49:19.597283   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 18:49:19.637707   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 18:49:19.677508   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem --> /usr/share/ca-certificates/38899.pem (1338 bytes)
	I0213 18:49:19.717520   46996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /usr/share/ca-certificates/388992.pem (1708 bytes)
	I0213 18:49:19.757819   46996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 18:49:19.786845   46996 ssh_runner.go:195] Run: openssl version
	I0213 18:49:19.792426   46996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 18:49:19.807664   46996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 18:49:19.812029   46996 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0213 18:49:19.812074   46996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 18:49:19.818555   46996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 18:49:19.834361   46996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38899.pem && ln -fs /usr/share/ca-certificates/38899.pem /etc/ssl/certs/38899.pem"
	I0213 18:49:19.849676   46996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38899.pem
	I0213 18:49:19.853872   46996 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 02:17 /usr/share/ca-certificates/38899.pem
	I0213 18:49:19.853918   46996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38899.pem
	I0213 18:49:19.861066   46996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38899.pem /etc/ssl/certs/51391683.0"
	I0213 18:49:19.876697   46996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388992.pem && ln -fs /usr/share/ca-certificates/388992.pem /etc/ssl/certs/388992.pem"
	I0213 18:49:19.892592   46996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388992.pem
	I0213 18:49:19.896727   46996 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 02:17 /usr/share/ca-certificates/388992.pem
	I0213 18:49:19.896765   46996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388992.pem
	I0213 18:49:19.903606   46996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/388992.pem /etc/ssl/certs/3ec20f2e.0"
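
Each certificate above is installed into the system trust store twice over: it is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (the <hash>.0 names such as b5213941.0), which is the layout OpenSSL scans when verifying peers. A small Go sketch of that hash-and-symlink step (assumes the openssl binary is on PATH; paths and the helper name are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertBySubjectHash asks openssl for the certificate's subject hash and
// symlinks the cert as <hash>.0 inside certsDir, replacing any stale link.
func linkCertBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl hash failed: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirrors `ln -fs`: force-replace an existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCertBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}
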
	I0213 18:49:19.919331   46996 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 18:49:19.923400   46996 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 18:49:19.923443   46996 kubeadm.go:404] StartCluster: {Name:skaffold-163000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:skaffold-163000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 18:49:19.923582   46996 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 18:49:19.941140   46996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 18:49:19.956714   46996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 18:49:19.971472   46996 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 18:49:19.971523   46996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 18:49:19.986780   46996 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 18:49:19.986843   46996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 18:49:20.035072   46996 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 18:49:20.035115   46996 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 18:49:20.159415   46996 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 18:49:20.159582   46996 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 18:49:20.159654   46996 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 18:49:20.447081   46996 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 18:49:20.489343   46996 out.go:204]   - Generating certificates and keys ...
	I0213 18:49:20.489422   46996 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 18:49:20.489481   46996 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 18:49:20.697509   46996 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 18:49:20.905374   46996 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 18:49:20.969681   46996 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 18:49:21.151737   46996 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 18:49:21.261644   46996 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 18:49:21.261822   46996 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost skaffold-163000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0213 18:49:21.420928   46996 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 18:49:21.421130   46996 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost skaffold-163000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0213 18:49:21.601292   46996 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 18:49:21.788804   46996 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 18:49:21.867831   46996 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 18:49:21.867888   46996 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 18:49:21.989668   46996 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 18:49:22.263564   46996 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 18:49:22.391668   46996 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 18:49:22.575209   46996 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 18:49:22.576124   46996 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 18:49:22.578480   46996 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 18:49:22.600153   46996 out.go:204]   - Booting up control plane ...
	I0213 18:49:22.600217   46996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 18:49:22.600286   46996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 18:49:22.600347   46996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 18:49:22.600431   46996 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 18:49:22.600499   46996 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 18:49:22.600528   46996 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 18:49:22.664245   46996 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 18:49:28.166636   46996 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502169 seconds
	I0213 18:49:28.166790   46996 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 18:49:28.176595   46996 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 18:49:28.691942   46996 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 18:49:28.692120   46996 kubeadm.go:322] [mark-control-plane] Marking the node skaffold-163000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 18:49:29.199424   46996 kubeadm.go:322] [bootstrap-token] Using token: z07ma5.b01g3s56t5oaux9o
	I0213 18:49:29.236912   46996 out.go:204]   - Configuring RBAC rules ...
	I0213 18:49:29.236994   46996 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 18:49:29.239270   46996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 18:49:29.280923   46996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 18:49:29.283805   46996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 18:49:29.286030   46996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 18:49:29.288217   46996 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 18:49:29.296127   46996 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 18:49:29.431654   46996 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 18:49:29.706164   46996 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 18:49:29.707573   46996 kubeadm.go:322] 
	I0213 18:49:29.707690   46996 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 18:49:29.707710   46996 kubeadm.go:322] 
	I0213 18:49:29.707856   46996 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 18:49:29.707864   46996 kubeadm.go:322] 
	I0213 18:49:29.707940   46996 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 18:49:29.708110   46996 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 18:49:29.708172   46996 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 18:49:29.708180   46996 kubeadm.go:322] 
	I0213 18:49:29.708252   46996 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 18:49:29.708266   46996 kubeadm.go:322] 
	I0213 18:49:29.708308   46996 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 18:49:29.708312   46996 kubeadm.go:322] 
	I0213 18:49:29.708346   46996 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 18:49:29.708406   46996 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 18:49:29.708450   46996 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 18:49:29.708452   46996 kubeadm.go:322] 
	I0213 18:49:29.708512   46996 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 18:49:29.708600   46996 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 18:49:29.708605   46996 kubeadm.go:322] 
	I0213 18:49:29.708722   46996 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z07ma5.b01g3s56t5oaux9o \
	I0213 18:49:29.708848   46996 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:37f5d29b605db0b241ae071f2c67ba54403aaba5987d1730ec948834f9a4aa2b \
	I0213 18:49:29.708876   46996 kubeadm.go:322] 	--control-plane 
	I0213 18:49:29.708882   46996 kubeadm.go:322] 
	I0213 18:49:29.708973   46996 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 18:49:29.708978   46996 kubeadm.go:322] 
	I0213 18:49:29.709076   46996 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z07ma5.b01g3s56t5oaux9o \
	I0213 18:49:29.709198   46996 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:37f5d29b605db0b241ae071f2c67ba54403aaba5987d1730ec948834f9a4aa2b 
	I0213 18:49:29.714477   46996 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0213 18:49:29.714621   46996 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
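	Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA public key. It can be recomputed on the node with the standard kubeadm recipe; the CA path below is taken from the certificateDir logged above, so treat it as an assumption for other setups:
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	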
	I0213 18:49:29.714634   46996 cni.go:84] Creating CNI manager for ""
	I0213 18:49:29.714654   46996 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 18:49:29.754829   46996 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 18:49:29.814067   46996 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 18:49:29.831498   46996 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
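	Note: the 457-byte file scp'd above is minikube's bridge CNI config. The exact contents are not reproduced in this log; a minimal bridge conflist of the same general shape (field values assumed, illustrative only) would look like:
	
	  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF
	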
	I0213 18:49:29.859881   46996 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 18:49:29.859947   46996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 18:49:29.859952   46996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a5eca87e70081d242c0fa2e2466e3725e217444d minikube.k8s.io/name=skaffold-163000 minikube.k8s.io/updated_at=2024_02_13T18_49_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 18:49:29.869631   46996 ops.go:34] apiserver oom_adj: -16
	I0213 18:49:29.962439   46996 kubeadm.go:1088] duration metric: took 102.548995ms to wait for elevateKubeSystemPrivileges.
	I0213 18:49:29.962450   46996 kubeadm.go:406] StartCluster complete in 10.0391714s
	I0213 18:49:29.962462   46996 settings.go:142] acquiring lock: {Name:mke46562c9f92468d93bd6cd756238f74ba38936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:49:29.962549   46996 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 18:49:29.963083   46996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/kubeconfig: {Name:mk18bf84f3ce48ab7f0238c5bd9b6dfe6fbb866a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:49:29.963356   46996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 18:49:29.963385   46996 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 18:49:29.963440   46996 addons.go:69] Setting storage-provisioner=true in profile "skaffold-163000"
	I0213 18:49:29.963455   46996 addons.go:234] Setting addon storage-provisioner=true in "skaffold-163000"
	I0213 18:49:29.963459   46996 addons.go:69] Setting default-storageclass=true in profile "skaffold-163000"
	I0213 18:49:29.963476   46996 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "skaffold-163000"
	I0213 18:49:29.963491   46996 config.go:182] Loaded profile config "skaffold-163000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 18:49:29.963496   46996 host.go:66] Checking if "skaffold-163000" exists ...
	I0213 18:49:29.963773   46996 cli_runner.go:164] Run: docker container inspect skaffold-163000 --format={{.State.Status}}
	I0213 18:49:29.963829   46996 cli_runner.go:164] Run: docker container inspect skaffold-163000 --format={{.State.Status}}
	I0213 18:49:30.052442   46996 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 18:49:30.028508   46996 addons.go:234] Setting addon default-storageclass=true in "skaffold-163000"
	I0213 18:49:30.052516   46996 host.go:66] Checking if "skaffold-163000" exists ...
	I0213 18:49:30.072789   46996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
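	Note: the sed pipeline above injects a hosts block (192.168.65.254 host.minikube.internal) into the CoreDNS Corefile, which the "host record injected" line further down confirms. Assuming the standard kubeadm CoreDNS ConfigMap layout, the result can be inspected with:
	
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	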
	I0213 18:49:30.073404   46996 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 18:49:30.073409   46996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 18:49:30.073483   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:30.074268   46996 cli_runner.go:164] Run: docker container inspect skaffold-163000 --format={{.State.Status}}
	I0213 18:49:30.140291   46996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54812 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/skaffold-163000/id_rsa Username:docker}
	I0213 18:49:30.140577   46996 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 18:49:30.140585   46996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 18:49:30.140673   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:30.201253   46996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54812 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/skaffold-163000/id_rsa Username:docker}
	I0213 18:49:30.320881   46996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 18:49:30.336467   46996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
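	Note: the two kubectl apply calls above install the storage-provisioner and storageclass addon manifests inside the node. A quick check from the host, using the kubeconfig written for this profile, would be:
	
	  kubectl -n kube-system get pod storage-provisioner
	  kubectl get storageclass
	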
	I0213 18:49:30.511134   46996 kapi.go:248] "coredns" deployment in "kube-system" namespace and "skaffold-163000" context rescaled to 1 replicas
	I0213 18:49:30.511158   46996 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 18:49:30.532821   46996 out.go:177] * Verifying Kubernetes components...
	I0213 18:49:30.554449   46996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 18:49:31.021913   46996 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0213 18:49:31.141921   46996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-163000
	I0213 18:49:31.172959   46996 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0213 18:49:31.214892   46996 addons.go:505] enable addons completed in 1.251518924s: enabled=[storage-provisioner default-storageclass]
	I0213 18:49:31.220854   46996 api_server.go:52] waiting for apiserver process to appear ...
	I0213 18:49:31.220899   46996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 18:49:31.238246   46996 api_server.go:72] duration metric: took 727.08182ms to wait for apiserver process to appear ...
	I0213 18:49:31.238255   46996 api_server.go:88] waiting for apiserver healthz status ...
	I0213 18:49:31.238274   46996 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54811/healthz ...
	I0213 18:49:31.243600   46996 api_server.go:279] https://127.0.0.1:54811/healthz returned 200:
	ok
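	Note: the healthz probe above goes through the host-mapped API server port for this run (54811). The same endpoint can be probed with curl; -k skips verification of the minikubeCA-signed certificate, and depending on anonymous-auth/RBAC settings an unauthenticated request may be answered differently than the probe above:
	
	  curl -k https://127.0.0.1:54811/healthz    # expect: ok
	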
	I0213 18:49:31.245026   46996 api_server.go:141] control plane version: v1.28.4
	I0213 18:49:31.245034   46996 api_server.go:131] duration metric: took 6.776096ms to wait for apiserver health ...
	I0213 18:49:31.245042   46996 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 18:49:31.250453   46996 system_pods.go:59] 5 kube-system pods found
	I0213 18:49:31.250466   46996 system_pods.go:61] "etcd-skaffold-163000" [31928590-f2f0-4d79-b653-f59441b00c6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 18:49:31.250473   46996 system_pods.go:61] "kube-apiserver-skaffold-163000" [819997be-14f2-4612-8ff3-d95e03e66002] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 18:49:31.250478   46996 system_pods.go:61] "kube-controller-manager-skaffold-163000" [db809134-ae62-4c4e-a783-5a4bec406c01] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 18:49:31.250483   46996 system_pods.go:61] "kube-scheduler-skaffold-163000" [a323efe8-c497-4e68-bb96-5a1df46d9985] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 18:49:31.250488   46996 system_pods.go:61] "storage-provisioner" [6a4328e9-240b-48af-ae60-1ae95984240d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0213 18:49:31.250496   46996 system_pods.go:74] duration metric: took 5.450706ms to wait for pod list to return data ...
	I0213 18:49:31.250500   46996 kubeadm.go:581] duration metric: took 739.339298ms to wait for : map[apiserver:true system_pods:true] ...
	I0213 18:49:31.250506   46996 node_conditions.go:102] verifying NodePressure condition ...
	I0213 18:49:31.253284   46996 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 18:49:31.253292   46996 node_conditions.go:123] node cpu capacity is 12
	I0213 18:49:31.253306   46996 node_conditions.go:105] duration metric: took 2.795608ms to run NodePressure ...
	I0213 18:49:31.253312   46996 start.go:228] waiting for startup goroutines ...
	I0213 18:49:31.253316   46996 start.go:233] waiting for cluster config update ...
	I0213 18:49:31.253325   46996 start.go:242] writing updated cluster config ...
	I0213 18:49:31.253630   46996 ssh_runner.go:195] Run: rm -f paused
	I0213 18:49:31.296980   46996 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 18:49:31.321078   46996 out.go:177] * Done! kubectl is now configured to use "skaffold-163000" cluster and "default" namespace by default
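	Note: the final line reports a one-minor-version skew between the local kubectl (1.29.1) and the cluster (1.28.4), which is within kubectl's supported skew. A quick way to check this yourself:
	
	  kubectl version --output=yaml    # compare clientVersion and serverVersion
	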
	
	
	==> Docker <==
	Feb 14 02:49:17 skaffold-163000 systemd[1]: Started Docker Application Container Engine.
	Feb 14 02:49:18 skaffold-163000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Feb 14 02:49:18 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:18Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Feb 14 02:49:18 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:18Z" level=info msg="Start docker client with request timeout 0s"
	Feb 14 02:49:18 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:18Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Feb 14 02:49:18 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:18Z" level=info msg="Loaded network plugin cni"
	Feb 14 02:49:18 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:18Z" level=info msg="Docker cri networking managed by network plugin cni"
	Feb 14 02:49:18 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:18Z" level=info msg="Docker Info: &{ID:805c65d1-1669-4416-9d9b-b826595c0e1b Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:35 SystemTime:2024-02-14T02:49:18.201675273Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:6.6.12-linuxkit OperatingSystem:Ubuntu 22.04.3 LTS OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0000cf8f0 NCPU:12 MemTotal:6213296128 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:skaffold-163000 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: DefaultAddressPools:[] Warnings:[]}"
	Feb 14 02:49:18 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:18Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 14 02:49:18 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:18Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 14 02:49:18 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:18Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 14 02:49:18 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:18Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 14 02:49:18 skaffold-163000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 14 02:49:23 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/46f0a069f80946e8736a568d2214ed2493cd094f6aa2f3a717586d77177dbec3/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 02:49:23 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/299aeb6e06a2b6bcc93f0fb2c41104d6813d78f5d3ffaa819ba440359d957e42/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 02:49:23 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/77c364a3254ad25addbbafe37b7084538c60429d618c8402233cdf14f8811934/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 02:49:23 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/436eb6a40a0dc7378552d63e0923f280fa2fb593730ee6ccbc03eebf7e30e941/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 02:49:42 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/48f31cce34f761813a3bf71e1a6ccfffd2e1fbe2f9b034e25c2ab796384f10b1/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 02:49:43 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6fcc7e1cc2adf4d7e8af081e561d4d3ceeedd4df3a79263726479e05e3630722/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 02:49:43 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/85834a866ea168de88805e2a63b829ad9265ca341971ba78df42423f32842e82/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 02:49:46 skaffold-163000 dockerd[1062]: time="2024-02-14T02:49:46.004701939Z" level=warning msg="no trace recorder found, skipping"
	Feb 14 02:49:50 skaffold-163000 cri-dockerd[1278]: time="2024-02-14T02:49:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Feb 14 02:50:13 skaffold-163000 dockerd[1062]: time="2024-02-14T02:50:13.024588823Z" level=info msg="ignoring event" container=2d4114038d9a439f6880825196e84d3d53957a1dd45d68920d22fae410dafd04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 02:53:49 skaffold-163000 dockerd[1062]: time="2024-02-14T02:53:49.910314986Z" level=warning msg="no trace recorder found, skipping"
	Feb 14 02:54:13 skaffold-163000 dockerd[1062]: time="2024-02-14T02:54:13.878917784Z" level=warning msg="no trace recorder found, skipping"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e31c065aa5494       6e38f40d628db       4 minutes ago       Running             storage-provisioner       1                   48f31cce34f76       storage-provisioner
	453d98ff7928a       ead0a4a53df89       4 minutes ago       Running             coredns                   0                   85834a866ea16       coredns-5dd5756b68-zhnb4
	f0238bd316c3d       83f6cc407eed8       4 minutes ago       Running             kube-proxy                0                   6fcc7e1cc2adf       kube-proxy-7nrc7
	2d4114038d9a4       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   48f31cce34f76       storage-provisioner
	0fc9fc4c8ab80       7fe0e6f37db33       4 minutes ago       Running             kube-apiserver            0                   436eb6a40a0dc       kube-apiserver-skaffold-163000
	90ee06db9b764       d058aa5ab969c       4 minutes ago       Running             kube-controller-manager   0                   46f0a069f8094       kube-controller-manager-skaffold-163000
	1b9d2564db1b7       73deb9a3f7025       4 minutes ago       Running             etcd                      0                   77c364a3254ad       etcd-skaffold-163000
	02bd1a36754d9       e3db313c6dbc0       4 minutes ago       Running             kube-scheduler            0                   299aeb6e06a2b       kube-scheduler-skaffold-163000
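	Note: the table above comes from the container runtime inside the node. The same containers can be listed directly with Docker there (minikube names them with a k8s_ prefix), for example:
	
	  minikube -p skaffold-163000 ssh -- docker ps --filter name=k8s_ --format '{{.ID}}  {{.Names}}'
	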
	
	
	==> coredns [453d98ff7928] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47465 - 36501 "HINFO IN 4057206021487685103.7213929758727575871. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011203456s
	
	
	==> describe nodes <==
	Name:               skaffold-163000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=skaffold-163000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5eca87e70081d242c0fa2e2466e3725e217444d
	                    minikube.k8s.io/name=skaffold-163000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T18_49_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 02:49:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  skaffold-163000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 02:54:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 02:54:04 +0000   Wed, 14 Feb 2024 02:49:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 02:54:04 +0000   Wed, 14 Feb 2024 02:49:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 02:54:04 +0000   Wed, 14 Feb 2024 02:49:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 02:54:04 +0000   Wed, 14 Feb 2024 02:49:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    skaffold-163000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067672Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067672Ki
	  pods:               110
	System Info:
	  Machine ID:                 432eac06ec524aadb5204fca8634369e
	  System UUID:                432eac06ec524aadb5204fca8634369e
	  Boot ID:                    f9e2bb32-14d2-464f-a920-a74ec4f29d93
	  Kernel Version:             6.6.12-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-zhnb4                   100m (0%)     0 (0%)      70Mi (1%)        170Mi (2%)     4m33s
	  kube-system                 etcd-skaffold-163000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         4m46s
	  kube-system                 kube-apiserver-skaffold-163000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-controller-manager-skaffold-163000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-proxy-7nrc7                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-skaffold-163000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (6%)    0 (0%)
	  memory             170Mi (2%)   170Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node skaffold-163000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node skaffold-163000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node skaffold-163000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m46s                  kubelet          Node skaffold-163000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s                  kubelet          Node skaffold-163000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s                  kubelet          Node skaffold-163000 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4m46s                  kubelet          Node skaffold-163000 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m46s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m36s                  kubelet          Node skaffold-163000 status is now: NodeReady
	  Normal  RegisteredNode           4m34s                  node-controller  Node skaffold-163000 event: Registered Node skaffold-163000 in Controller
	
	
	==> dmesg <==
	[  +0.000002] virtio-pci 0000:00:07.0: PCI INT A: no GSI
	[  +0.003002] virtio-pci 0000:00:08.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:08.0: PCI INT A: no GSI
	[  +0.000494] virtio-pci 0000:00:09.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:09.0: PCI INT A: no GSI
	[  +0.001891] virtio-pci 0000:00:0a.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:0a.0: PCI INT A: no GSI
	[  +0.004184] virtio-pci 0000:00:0b.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0b.0: PCI INT A: no GSI
	[  +0.003898] virtio-pci 0000:00:0c.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0c.0: PCI INT A: no GSI
	[  +0.004036] virtio-pci 0000:00:0d.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0d.0: PCI INT A: no GSI
	[  +0.004345] virtio-pci 0000:00:0e.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0e.0: PCI INT A: no GSI
	[  +0.003479] virtio-pci 0000:00:0f.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0f.0: PCI INT A: no GSI
	[  +0.002982] virtio-pci 0000:00:10.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:10.0: PCI INT A: no GSI
	[  +0.009616] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
	[  +0.024554] lpc_ich 0000:00:1f.0: No MFD cells added
	[  +0.224176] netlink: 'init': attribute type 4 has an invalid length.
	[  +0.023090] fakeowner: loading out-of-tree module taints kernel.
	[  +0.021835] netlink: 'init': attribute type 22 has an invalid length.
	[Feb14 02:09] systemd[1534]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [1b9d2564db1b] <==
	{"level":"info","ts":"2024-02-14T02:49:24.137102Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-14T02:49:24.137444Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-14T02:49:24.523701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-14T02:49:24.523774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-14T02:49:24.523794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-02-14T02:49:24.523809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-02-14T02:49:24.523815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-14T02:49:24.523821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-02-14T02:49:24.523826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-14T02:49:24.524929Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:49:24.525929Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:skaffold-163000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T02:49:24.525992Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T02:49:24.526014Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T02:49:24.526614Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T02:49:24.526673Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T02:49:24.526713Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:49:24.528152Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:49:24.528217Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:49:24.529222Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-14T02:49:24.52988Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T02:53:18.904499Z","caller":"traceutil/trace.go:171","msg":"trace[1952684420] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"221.232323ms","start":"2024-02-14T02:53:18.683256Z","end":"2024-02-14T02:53:18.904488Z","steps":["trace[1952684420] 'process raft request'  (duration: 221.142834ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-14T02:53:23.119214Z","caller":"traceutil/trace.go:171","msg":"trace[1103785252] linearizableReadLoop","detail":"{readStateIndex:628; appliedIndex:627; }","duration":"139.305377ms","start":"2024-02-14T02:53:22.979897Z","end":"2024-02-14T02:53:23.119203Z","steps":["trace[1103785252] 'read index received'  (duration: 139.157307ms)","trace[1103785252] 'applied index is now lower than readState.Index'  (duration: 147.696µs)"],"step_count":2}
	{"level":"info","ts":"2024-02-14T02:53:23.119308Z","caller":"traceutil/trace.go:171","msg":"trace[2099667552] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"203.286484ms","start":"2024-02-14T02:53:22.916012Z","end":"2024-02-14T02:53:23.119299Z","steps":["trace[2099667552] 'process raft request'  (duration: 203.096819ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-14T02:53:23.119297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.403844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-14T02:53:23.119415Z","caller":"traceutil/trace.go:171","msg":"trace[1534631444] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:574; }","duration":"139.53083ms","start":"2024-02-14T02:53:22.979879Z","end":"2024-02-14T02:53:23.119409Z","steps":["trace[1534631444] 'agreement among raft nodes before linearized reading'  (duration: 139.39258ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:54:15 up  1:33,  0 users,  load average: 4.07, 4.37, 4.23
	Linux skaffold-163000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [0fc9fc4c8ab8] <==
	I0214 02:49:26.516377       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0214 02:49:26.517658       1 aggregator.go:166] initial CRD sync complete...
	I0214 02:49:26.517687       1 autoregister_controller.go:141] Starting autoregister controller
	I0214 02:49:26.517692       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0214 02:49:26.517698       1 cache.go:39] Caches are synced for autoregister controller
	I0214 02:49:26.604866       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 02:49:26.605398       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0214 02:49:26.613734       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0214 02:49:26.613869       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0214 02:49:26.613878       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0214 02:49:27.418097       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0214 02:49:27.421396       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0214 02:49:27.421432       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 02:49:27.699604       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 02:49:27.723984       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0214 02:49:27.826152       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0214 02:49:27.831581       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0214 02:49:27.832366       1 controller.go:624] quota admission added evaluator for: endpoints
	I0214 02:49:27.835110       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0214 02:49:28.515535       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0214 02:49:29.422999       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0214 02:49:29.430692       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0214 02:49:29.439474       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0214 02:49:42.510196       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0214 02:49:42.609130       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [90ee06db9b76] <==
	I0214 02:49:41.708891       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0214 02:49:41.709042       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0214 02:49:41.708947       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0214 02:49:41.710159       1 event.go:307] "Event occurred" object="skaffold-163000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node skaffold-163000 event: Registered Node skaffold-163000 in Controller"
	I0214 02:49:41.710162       1 shared_informer.go:318] Caches are synced for TTL
	I0214 02:49:41.711153       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0214 02:49:41.714596       1 range_allocator.go:380] "Set node PodCIDR" node="skaffold-163000" podCIDRs=["10.244.0.0/24"]
	I0214 02:49:41.723124       1 shared_informer.go:318] Caches are synced for HPA
	I0214 02:49:41.796495       1 shared_informer.go:318] Caches are synced for attach detach
	I0214 02:49:41.811286       1 shared_informer.go:318] Caches are synced for resource quota
	I0214 02:49:41.885244       1 shared_informer.go:318] Caches are synced for daemon sets
	I0214 02:49:41.910430       1 shared_informer.go:318] Caches are synced for resource quota
	I0214 02:49:41.910497       1 shared_informer.go:318] Caches are synced for stateful set
	I0214 02:49:42.225156       1 shared_informer.go:318] Caches are synced for garbage collector
	I0214 02:49:42.257541       1 shared_informer.go:318] Caches are synced for garbage collector
	I0214 02:49:42.257583       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0214 02:49:42.513318       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 1"
	I0214 02:49:42.615720       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7nrc7"
	I0214 02:49:42.712180       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-zhnb4"
	I0214 02:49:42.720025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="206.846528ms"
	I0214 02:49:42.815435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.2807ms"
	I0214 02:49:42.815530       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.671µs"
	I0214 02:49:43.841031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.492µs"
	I0214 02:49:43.856351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.687446ms"
	I0214 02:49:43.856697       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.33µs"
	
	
	==> kube-proxy [f0238bd316c3] <==
	I0214 02:49:43.220615       1 server_others.go:69] "Using iptables proxy"
	I0214 02:49:43.228371       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0214 02:49:43.248809       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 02:49:43.251208       1 server_others.go:152] "Using iptables Proxier"
	I0214 02:49:43.251252       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0214 02:49:43.251259       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0214 02:49:43.251275       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0214 02:49:43.251558       1 server.go:846] "Version info" version="v1.28.4"
	I0214 02:49:43.251592       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 02:49:43.252322       1 config.go:97] "Starting endpoint slice config controller"
	I0214 02:49:43.252339       1 config.go:188] "Starting service config controller"
	I0214 02:49:43.252350       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0214 02:49:43.252583       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0214 02:49:43.252626       1 config.go:315] "Starting node config controller"
	I0214 02:49:43.252634       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0214 02:49:43.353307       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0214 02:49:43.354063       1 shared_informer.go:318] Caches are synced for node config
	I0214 02:49:43.356104       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [02bd1a36754d] <==
	W0214 02:49:26.606945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0214 02:49:26.607027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0214 02:49:26.607119       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0214 02:49:26.607165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0214 02:49:26.607437       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0214 02:49:26.607454       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0214 02:49:26.607510       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 02:49:26.607525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0214 02:49:26.607646       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0214 02:49:26.607775       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0214 02:49:27.437603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 02:49:27.437661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0214 02:49:27.473459       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0214 02:49:27.473504       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0214 02:49:27.477224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0214 02:49:27.477264       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0214 02:49:27.491405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 02:49:27.491571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0214 02:49:27.550586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0214 02:49:27.550629       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0214 02:49:27.555457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0214 02:49:27.555510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0214 02:49:27.618595       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 02:49:27.618646       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0214 02:49:29.525862       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 02:49:30 skaffold-163000 kubelet[2467]: E0214 02:49:30.729589    2467 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-skaffold-163000\" already exists" pod="kube-system/kube-apiserver-skaffold-163000"
	Feb 14 02:49:30 skaffold-163000 kubelet[2467]: I0214 02:49:30.819693    2467 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-skaffold-163000" podStartSLOduration=1.819643441 podCreationTimestamp="2024-02-14 02:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 02:49:30.819386001 +0000 UTC m=+1.413927857" watchObservedRunningTime="2024-02-14 02:49:30.819643441 +0000 UTC m=+1.414185281"
	Feb 14 02:49:30 skaffold-163000 kubelet[2467]: I0214 02:49:30.819793    2467 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-skaffold-163000" podStartSLOduration=1.819776285 podCreationTimestamp="2024-02-14 02:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 02:49:30.807577313 +0000 UTC m=+1.402119162" watchObservedRunningTime="2024-02-14 02:49:30.819776285 +0000 UTC m=+1.414318133"
	Feb 14 02:49:30 skaffold-163000 kubelet[2467]: I0214 02:49:30.911081    2467 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-skaffold-163000" podStartSLOduration=1.9110444819999999 podCreationTimestamp="2024-02-14 02:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 02:49:30.909703308 +0000 UTC m=+1.504245155" watchObservedRunningTime="2024-02-14 02:49:30.911044482 +0000 UTC m=+1.505586327"
	Feb 14 02:49:30 skaffold-163000 kubelet[2467]: I0214 02:49:30.932428    2467 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-skaffold-163000" podStartSLOduration=1.932395117 podCreationTimestamp="2024-02-14 02:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 02:49:30.921714154 +0000 UTC m=+1.516256009" watchObservedRunningTime="2024-02-14 02:49:30.932395117 +0000 UTC m=+1.526936957"
	Feb 14 02:49:41 skaffold-163000 kubelet[2467]: I0214 02:49:41.845400    2467 topology_manager.go:215] "Topology Admit Handler" podUID="6a4328e9-240b-48af-ae60-1ae95984240d" podNamespace="kube-system" podName="storage-provisioner"
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: I0214 02:49:42.012248    2467 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6a4328e9-240b-48af-ae60-1ae95984240d-tmp\") pod \"storage-provisioner\" (UID: \"6a4328e9-240b-48af-ae60-1ae95984240d\") " pod="kube-system/storage-provisioner"
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: I0214 02:49:42.012333    2467 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csfc8\" (UniqueName: \"kubernetes.io/projected/6a4328e9-240b-48af-ae60-1ae95984240d-kube-api-access-csfc8\") pod \"storage-provisioner\" (UID: \"6a4328e9-240b-48af-ae60-1ae95984240d\") " pod="kube-system/storage-provisioner"
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: E0214 02:49:42.118152    2467 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: E0214 02:49:42.118197    2467 projected.go:198] Error preparing data for projected volume kube-api-access-csfc8 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: E0214 02:49:42.118268    2467 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a4328e9-240b-48af-ae60-1ae95984240d-kube-api-access-csfc8 podName:6a4328e9-240b-48af-ae60-1ae95984240d nodeName:}" failed. No retries permitted until 2024-02-14 02:49:42.618253378 +0000 UTC m=+13.212795217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-csfc8" (UniqueName: "kubernetes.io/projected/6a4328e9-240b-48af-ae60-1ae95984240d-kube-api-access-csfc8") pod "storage-provisioner" (UID: "6a4328e9-240b-48af-ae60-1ae95984240d") : configmap "kube-root-ca.crt" not found
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: I0214 02:49:42.619538    2467 topology_manager.go:215] "Topology Admit Handler" podUID="1f5286e4-774a-4dd7-bd20-8de3a19cfaab" podNamespace="kube-system" podName="kube-proxy-7nrc7"
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: I0214 02:49:42.718791    2467 topology_manager.go:215] "Topology Admit Handler" podUID="51d5a040-90cc-43fd-9640-a47425472c00" podNamespace="kube-system" podName="coredns-5dd5756b68-zhnb4"
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: I0214 02:49:42.719541    2467 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1f5286e4-774a-4dd7-bd20-8de3a19cfaab-kube-proxy\") pod \"kube-proxy-7nrc7\" (UID: \"1f5286e4-774a-4dd7-bd20-8de3a19cfaab\") " pod="kube-system/kube-proxy-7nrc7"
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: I0214 02:49:42.719595    2467 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccnv\" (UniqueName: \"kubernetes.io/projected/1f5286e4-774a-4dd7-bd20-8de3a19cfaab-kube-api-access-rccnv\") pod \"kube-proxy-7nrc7\" (UID: \"1f5286e4-774a-4dd7-bd20-8de3a19cfaab\") " pod="kube-system/kube-proxy-7nrc7"
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: I0214 02:49:42.719666    2467 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f5286e4-774a-4dd7-bd20-8de3a19cfaab-xtables-lock\") pod \"kube-proxy-7nrc7\" (UID: \"1f5286e4-774a-4dd7-bd20-8de3a19cfaab\") " pod="kube-system/kube-proxy-7nrc7"
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: I0214 02:49:42.719708    2467 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f5286e4-774a-4dd7-bd20-8de3a19cfaab-lib-modules\") pod \"kube-proxy-7nrc7\" (UID: \"1f5286e4-774a-4dd7-bd20-8de3a19cfaab\") " pod="kube-system/kube-proxy-7nrc7"
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: I0214 02:49:42.820128    2467 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51d5a040-90cc-43fd-9640-a47425472c00-config-volume\") pod \"coredns-5dd5756b68-zhnb4\" (UID: \"51d5a040-90cc-43fd-9640-a47425472c00\") " pod="kube-system/coredns-5dd5756b68-zhnb4"
	Feb 14 02:49:42 skaffold-163000 kubelet[2467]: I0214 02:49:42.820205    2467 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkhmk\" (UniqueName: \"kubernetes.io/projected/51d5a040-90cc-43fd-9640-a47425472c00-kube-api-access-tkhmk\") pod \"coredns-5dd5756b68-zhnb4\" (UID: \"51d5a040-90cc-43fd-9640-a47425472c00\") " pod="kube-system/coredns-5dd5756b68-zhnb4"
	Feb 14 02:49:43 skaffold-163000 kubelet[2467]: I0214 02:49:43.840804    2467 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-zhnb4" podStartSLOduration=1.84077299 podCreationTimestamp="2024-02-14 02:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 02:49:43.840719632 +0000 UTC m=+14.435261480" watchObservedRunningTime="2024-02-14 02:49:43.84077299 +0000 UTC m=+14.435314830"
	Feb 14 02:49:43 skaffold-163000 kubelet[2467]: I0214 02:49:43.841199    2467 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.841176737 podCreationTimestamp="2024-02-14 02:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 02:49:43.832031791 +0000 UTC m=+14.426573639" watchObservedRunningTime="2024-02-14 02:49:43.841176737 +0000 UTC m=+14.435718584"
	Feb 14 02:49:43 skaffold-163000 kubelet[2467]: I0214 02:49:43.857950    2467 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7nrc7" podStartSLOduration=1.857829738 podCreationTimestamp="2024-02-14 02:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 02:49:43.857705602 +0000 UTC m=+14.452247451" watchObservedRunningTime="2024-02-14 02:49:43.857829738 +0000 UTC m=+14.452371586"
	Feb 14 02:49:50 skaffold-163000 kubelet[2467]: I0214 02:49:50.182389    2467 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 14 02:49:50 skaffold-163000 kubelet[2467]: I0214 02:49:50.183045    2467 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 14 02:50:14 skaffold-163000 kubelet[2467]: I0214 02:50:14.043872    2467 scope.go:117] "RemoveContainer" containerID="2d4114038d9a439f6880825196e84d3d53957a1dd45d68920d22fae410dafd04"
	
	
	==> storage-provisioner [2d4114038d9a] <==
	I0214 02:49:43.012419       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0214 02:50:13.014924       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e31c065aa549] <==
	I0214 02:50:14.107148       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 02:50:14.115151       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 02:50:14.115199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 02:50:14.120910       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 02:50:14.121206       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4776b2f8-63ac-400a-91d1-d8dca4551ab5", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' skaffold-163000_9c513fb6-2da9-4d20-b7cf-1bbc05c75451 became leader
	I0214 02:50:14.121256       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_skaffold-163000_9c513fb6-2da9-4d20-b7cf-1bbc05c75451!
	I0214 02:50:14.221729       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_skaffold-163000_9c513fb6-2da9-4d20-b7cf-1bbc05c75451!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p skaffold-163000 -n skaffold-163000
helpers_test.go:261: (dbg) Run:  kubectl --context skaffold-163000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestSkaffold FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "skaffold-163000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-163000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-163000: (3.020248416s)
--- FAIL: TestSkaffold (318.74s)
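For local triage of the storage-provisioner failure shown in the post-mortem above (F0214 02:50:13 ... dial tcp 10.96.0.1:443: i/o timeout), a quick in-cluster probe of the apiserver service ClusterIP can confirm whether 10.96.0.1:443 is reachable at all from a pod. This is an editor-added sketch, not part of the test suite or its logs: it assumes a fresh reproduction of the skaffold-163000 profile (the profile above is deleted during cleanup), and the pod name apiserver-probe and the curlimages/curl image are arbitrary choices.

	kubectl --context skaffold-163000 run apiserver-probe --rm -i --restart=Never --image=curlimages/curl -- -sk --max-time 10 https://10.96.0.1:443/version

A successful reply prints the apiserver's /version JSON; a timeout reproduces the provisioner's failure mode independently of the provisioner itself.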

                                                
                                    
TestKubernetesUpgrade (336.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-470000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0213 18:55:23.832970   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-470000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m19.766985516s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-470000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18165
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-470000 in cluster kubernetes-upgrade-470000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 18:55:14.858996   47742 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:55:14.859702   47742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:55:14.859711   47742 out.go:304] Setting ErrFile to fd 2...
	I0213 18:55:14.859719   47742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:55:14.860316   47742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 18:55:14.861884   47742 out.go:298] Setting JSON to false
	I0213 18:55:14.885239   47742 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":16173,"bootTime":1707863141,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 18:55:14.885360   47742 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 18:55:14.907494   47742 out.go:177] * [kubernetes-upgrade-470000] minikube v1.32.0 on Darwin 14.3.1
	I0213 18:55:14.972104   47742 out.go:177]   - MINIKUBE_LOCATION=18165
	I0213 18:55:14.950123   47742 notify.go:220] Checking for updates...
	I0213 18:55:15.017035   47742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 18:55:15.058670   47742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 18:55:15.122939   47742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 18:55:15.165561   47742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	I0213 18:55:15.207899   47742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 18:55:15.229715   47742 config.go:182] Loaded profile config "missing-upgrade-807000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 18:55:15.229841   47742 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 18:55:15.286377   47742 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 18:55:15.286530   47742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:55:15.394736   47742 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-14 02:55:15.384587652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:55:15.437302   47742 out.go:177] * Using the docker driver based on user configuration
	I0213 18:55:15.460363   47742 start.go:298] selected driver: docker
	I0213 18:55:15.460376   47742 start.go:902] validating driver "docker" against <nil>
	I0213 18:55:15.460384   47742 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 18:55:15.463760   47742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:55:15.572268   47742 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-14 02:55:15.562237278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:55:15.572455   47742 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 18:55:15.572638   47742 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 18:55:15.593685   47742 out.go:177] * Using Docker Desktop driver with root privileges
	I0213 18:55:15.614775   47742 cni.go:84] Creating CNI manager for ""
	I0213 18:55:15.614793   47742 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 18:55:15.614801   47742 start_flags.go:321] config:
	{Name:kubernetes-upgrade-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 18:55:15.635729   47742 out.go:177] * Starting control plane node kubernetes-upgrade-470000 in cluster kubernetes-upgrade-470000
	I0213 18:55:15.677739   47742 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 18:55:15.698740   47742 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 18:55:15.740733   47742 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 18:55:15.740784   47742 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0213 18:55:15.740800   47742 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 18:55:15.740805   47742 cache.go:56] Caching tarball of preloaded images
	I0213 18:55:15.740987   47742 preload.go:174] Found /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 18:55:15.741000   47742 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 18:55:15.741576   47742 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/config.json ...
	I0213 18:55:15.741776   47742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/config.json: {Name:mkc0daa4af5773df371687ee99f4c3ded428f464 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:55:15.793841   47742 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 18:55:15.793879   47742 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 18:55:15.793904   47742 cache.go:194] Successfully downloaded all kic artifacts
	I0213 18:55:15.793940   47742 start.go:365] acquiring machines lock for kubernetes-upgrade-470000: {Name:mkdfc57245ba73336bfea94694a4695b8e69a0e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 18:55:15.794098   47742 start.go:369] acquired machines lock for "kubernetes-upgrade-470000" in 144.868µs
	I0213 18:55:15.794127   47742 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-470000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 18:55:15.794190   47742 start.go:125] createHost starting for "" (driver="docker")
	I0213 18:55:15.815724   47742 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0213 18:55:15.815988   47742 start.go:159] libmachine.API.Create for "kubernetes-upgrade-470000" (driver="docker")
	I0213 18:55:15.816042   47742 client.go:168] LocalClient.Create starting
	I0213 18:55:15.816229   47742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem
	I0213 18:55:15.816300   47742 main.go:141] libmachine: Decoding PEM data...
	I0213 18:55:15.816326   47742 main.go:141] libmachine: Parsing certificate...
	I0213 18:55:15.816395   47742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem
	I0213 18:55:15.816448   47742 main.go:141] libmachine: Decoding PEM data...
	I0213 18:55:15.816460   47742 main.go:141] libmachine: Parsing certificate...
	I0213 18:55:15.837061   47742 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-470000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0213 18:55:15.889387   47742 cli_runner.go:211] docker network inspect kubernetes-upgrade-470000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0213 18:55:15.889506   47742 network_create.go:281] running [docker network inspect kubernetes-upgrade-470000] to gather additional debugging logs...
	I0213 18:55:15.889525   47742 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-470000
	W0213 18:55:15.943159   47742 cli_runner.go:211] docker network inspect kubernetes-upgrade-470000 returned with exit code 1
	I0213 18:55:15.943206   47742 network_create.go:284] error running [docker network inspect kubernetes-upgrade-470000]: docker network inspect kubernetes-upgrade-470000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-470000 not found
	I0213 18:55:15.943222   47742 network_create.go:286] output of [docker network inspect kubernetes-upgrade-470000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-470000 not found
	
	** /stderr **
	I0213 18:55:15.943366   47742 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 18:55:15.995835   47742 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 18:55:15.996207   47742 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001372fe0}
	I0213 18:55:15.996225   47742 network_create.go:124] attempt to create docker network kubernetes-upgrade-470000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0213 18:55:15.996306   47742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-470000 kubernetes-upgrade-470000
	W0213 18:55:16.049447   47742 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-470000 kubernetes-upgrade-470000 returned with exit code 1
	W0213 18:55:16.049498   47742 network_create.go:149] failed to create docker network kubernetes-upgrade-470000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-470000 kubernetes-upgrade-470000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0213 18:55:16.049517   47742 network_create.go:116] failed to create docker network kubernetes-upgrade-470000 192.168.58.0/24, will retry: subnet is taken
	I0213 18:55:16.051094   47742 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 18:55:16.051457   47742 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013fdac0}
	I0213 18:55:16.051470   47742 network_create.go:124] attempt to create docker network kubernetes-upgrade-470000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0213 18:55:16.051549   47742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-470000 kubernetes-upgrade-470000
	I0213 18:55:16.140746   47742 network_create.go:108] docker network kubernetes-upgrade-470000 192.168.67.0/24 created
	I0213 18:55:16.140780   47742 kic.go:121] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-470000" container
	I0213 18:55:16.140898   47742 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0213 18:55:16.194979   47742 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-470000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-470000 --label created_by.minikube.sigs.k8s.io=true
	I0213 18:55:16.248569   47742 oci.go:103] Successfully created a docker volume kubernetes-upgrade-470000
	I0213 18:55:16.248780   47742 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-470000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-470000 --entrypoint /usr/bin/test -v kubernetes-upgrade-470000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0213 18:55:16.656249   47742 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-470000
	I0213 18:55:16.656291   47742 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 18:55:16.656307   47742 kic.go:194] Starting extracting preloaded images to volume ...
	I0213 18:55:16.656410   47742 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-470000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0213 18:55:18.965565   47742 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-470000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.309115674s)
	I0213 18:55:18.965597   47742 kic.go:203] duration metric: took 2.309327 seconds to extract preloaded images to volume
	I0213 18:55:18.965716   47742 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0213 18:55:19.071860   47742 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-470000 --name kubernetes-upgrade-470000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-470000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-470000 --network kubernetes-upgrade-470000 --ip 192.168.67.2 --volume kubernetes-upgrade-470000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0213 18:55:19.347843   47742 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470000 --format={{.State.Running}}
	I0213 18:55:19.403065   47742 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470000 --format={{.State.Status}}
	I0213 18:55:19.460979   47742 cli_runner.go:164] Run: docker exec kubernetes-upgrade-470000 stat /var/lib/dpkg/alternatives/iptables
	I0213 18:55:19.618276   47742 oci.go:144] the created container "kubernetes-upgrade-470000" has a running status.
	I0213 18:55:19.618355   47742 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa...
	I0213 18:55:19.707408   47742 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0213 18:55:19.778592   47742 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470000 --format={{.State.Status}}
	I0213 18:55:19.836521   47742 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0213 18:55:19.836544   47742 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-470000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0213 18:55:19.957534   47742 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470000 --format={{.State.Status}}
	I0213 18:55:20.010078   47742 machine.go:88] provisioning docker machine ...
	I0213 18:55:20.010140   47742 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-470000"
	I0213 18:55:20.010246   47742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 18:55:20.126640   47742 main.go:141] libmachine: Using SSH client type: native
	I0213 18:55:20.127060   47742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54967 <nil> <nil>}
	I0213 18:55:20.127072   47742 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-470000 && echo "kubernetes-upgrade-470000" | sudo tee /etc/hostname
	I0213 18:55:20.289149   47742 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-470000
	
	I0213 18:55:20.289229   47742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 18:55:20.341754   47742 main.go:141] libmachine: Using SSH client type: native
	I0213 18:55:20.342043   47742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54967 <nil> <nil>}
	I0213 18:55:20.342056   47742 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-470000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-470000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-470000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 18:55:20.480634   47742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 18:55:20.480652   47742 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18165-38421/.minikube CaCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18165-38421/.minikube}
	I0213 18:55:20.480671   47742 ubuntu.go:177] setting up certificates
	I0213 18:55:20.480684   47742 provision.go:83] configureAuth start
	I0213 18:55:20.480759   47742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470000
	I0213 18:55:20.533183   47742 provision.go:138] copyHostCerts
	I0213 18:55:20.533276   47742 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem, removing ...
	I0213 18:55:20.533286   47742 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
	I0213 18:55:20.533434   47742 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem (1123 bytes)
	I0213 18:55:20.533665   47742 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem, removing ...
	I0213 18:55:20.533671   47742 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
	I0213 18:55:20.533758   47742 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem (1679 bytes)
	I0213 18:55:20.533960   47742 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem, removing ...
	I0213 18:55:20.533968   47742 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
	I0213 18:55:20.534794   47742 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem (1078 bytes)
	I0213 18:55:20.534974   47742 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-470000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-470000]
	I0213 18:55:20.638379   47742 provision.go:172] copyRemoteCerts
	I0213 18:55:20.638461   47742 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 18:55:20.638528   47742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 18:55:20.690340   47742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54967 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 18:55:20.796934   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 18:55:20.837920   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0213 18:55:20.879459   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 18:55:20.919368   47742 provision.go:86] duration metric: configureAuth took 438.674431ms
	I0213 18:55:20.919382   47742 ubuntu.go:193] setting minikube options for container-runtime
	I0213 18:55:20.919533   47742 config.go:182] Loaded profile config "kubernetes-upgrade-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0213 18:55:20.919605   47742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 18:55:20.973764   47742 main.go:141] libmachine: Using SSH client type: native
	I0213 18:55:20.974068   47742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54967 <nil> <nil>}
	I0213 18:55:20.974082   47742 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 18:55:21.111663   47742 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 18:55:21.111675   47742 ubuntu.go:71] root file system type: overlay
	I0213 18:55:21.111751   47742 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 18:55:21.111828   47742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 18:55:21.165344   47742 main.go:141] libmachine: Using SSH client type: native
	I0213 18:55:21.165651   47742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54967 <nil> <nil>}
	I0213 18:55:21.165699   47742 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 18:55:21.322076   47742 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 18:55:21.322190   47742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 18:55:21.375226   47742 main.go:141] libmachine: Using SSH client type: native
	I0213 18:55:21.375518   47742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54967 <nil> <nil>}
	I0213 18:55:21.375539   47742 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 18:55:22.002021   47742 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-14 02:55:21.317842394 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0213 18:55:22.002062   47742 machine.go:91] provisioned docker machine in 1.991954389s
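Note: the docker.service drop-in above is installed with a diff-and-swap one-liner that only moves docker.service.new into place and restarts Docker when the rendered unit differs from what is already installed, which is why the full unified diff is echoed back. After this step the active unit can be checked the same way the provisioner does a moment later (systemctl cat docker.service); a minimal sketch, run from the host against this profile's kic container:

	# Show the docker.service unit systemd actually loaded inside the kic container
	docker exec kubernetes-upgrade-470000 systemctl cat docker.service
	# Confirm dockerd restarted with the TLS and --default-ulimit flags from the new ExecStart
	docker exec kubernetes-upgrade-470000 pgrep -a dockerd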
	I0213 18:55:22.002072   47742 client.go:171] LocalClient.Create took 6.186116598s
	I0213 18:55:22.002102   47742 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-470000" took 6.186199478s
	I0213 18:55:22.002112   47742 start.go:300] post-start starting for "kubernetes-upgrade-470000" (driver="docker")
	I0213 18:55:22.002140   47742 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 18:55:22.002215   47742 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 18:55:22.002334   47742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 18:55:22.056312   47742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54967 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 18:55:22.160020   47742 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 18:55:22.164578   47742 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 18:55:22.164608   47742 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 18:55:22.164617   47742 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 18:55:22.164622   47742 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 18:55:22.164632   47742 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/addons for local assets ...
	I0213 18:55:22.164728   47742 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/files for local assets ...
	I0213 18:55:22.164930   47742 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> 388992.pem in /etc/ssl/certs
	I0213 18:55:22.165163   47742 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 18:55:22.179945   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /etc/ssl/certs/388992.pem (1708 bytes)
	I0213 18:55:22.220328   47742 start.go:303] post-start completed in 218.184783ms
	I0213 18:55:22.221098   47742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470000
	I0213 18:55:22.274796   47742 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/config.json ...
	I0213 18:55:22.275257   47742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 18:55:22.275324   47742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 18:55:22.328481   47742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54967 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 18:55:22.423997   47742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 18:55:22.428996   47742 start.go:128] duration metric: createHost completed in 6.634893356s
	I0213 18:55:22.429016   47742 start.go:83] releasing machines lock for "kubernetes-upgrade-470000", held for 6.635014435s
	I0213 18:55:22.429098   47742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470000
	I0213 18:55:22.481001   47742 ssh_runner.go:195] Run: cat /version.json
	I0213 18:55:22.481018   47742 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 18:55:22.481085   47742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 18:55:22.481098   47742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 18:55:22.538787   47742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54967 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 18:55:22.538803   47742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54967 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 18:55:22.740313   47742 ssh_runner.go:195] Run: systemctl --version
	I0213 18:55:22.744973   47742 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 18:55:22.749967   47742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 18:55:22.792103   47742 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 18:55:22.792172   47742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 18:55:22.820221   47742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 18:55:22.849243   47742 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
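Note: the two find/sed passes above add a name to the loopback config and rewrite the bridge/podman subnets to 10.244.0.0/16. The files they touched can be dumped straight from the node to verify the result; a quick sketch, using the config names reported on the line above:

	docker exec kubernetes-upgrade-470000 cat /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/87-podman-bridge.conflist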
	I0213 18:55:22.849260   47742 start.go:475] detecting cgroup driver to use...
	I0213 18:55:22.849300   47742 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 18:55:22.849448   47742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 18:55:22.878413   47742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0213 18:55:22.894992   47742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 18:55:22.911479   47742 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 18:55:22.911582   47742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 18:55:22.928812   47742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 18:55:22.944529   47742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 18:55:22.960457   47742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 18:55:22.977051   47742 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 18:55:22.993553   47742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 18:55:23.010353   47742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 18:55:23.024929   47742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 18:55:23.039622   47742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 18:55:23.100545   47742 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 18:55:23.191573   47742 start.go:475] detecting cgroup driver to use...
	I0213 18:55:23.191597   47742 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 18:55:23.191657   47742 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 18:55:23.212152   47742 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 18:55:23.212277   47742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 18:55:23.233579   47742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 18:55:23.265723   47742 ssh_runner.go:195] Run: which cri-dockerd
	I0213 18:55:23.270619   47742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 18:55:23.286986   47742 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 18:55:23.321590   47742 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 18:55:23.411932   47742 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 18:55:23.478571   47742 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 18:55:23.478711   47742 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 18:55:23.508902   47742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 18:55:23.567411   47742 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 18:55:23.811307   47742 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 18:55:23.836577   47742 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 18:55:23.906968   47742 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0213 18:55:23.907082   47742 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-470000 dig +short host.docker.internal
	I0213 18:55:24.018303   47742 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 18:55:24.018390   47742 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 18:55:24.023041   47742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 18:55:24.040702   47742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 18:55:24.094943   47742 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 18:55:24.095030   47742 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 18:55:24.113328   47742 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 18:55:24.113343   47742 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0213 18:55:24.113409   47742 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 18:55:24.129013   47742 ssh_runner.go:195] Run: which lz4
	I0213 18:55:24.133504   47742 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 18:55:24.138279   47742 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 18:55:24.138305   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0213 18:55:30.555938   47742 docker.go:649] Took 6.422633 seconds to copy over tarball
	I0213 18:55:30.556012   47742 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 18:55:33.632427   47742 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.07644013s)
	I0213 18:55:33.632451   47742 ssh_runner.go:146] rm: /preloaded.tar.lz4
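Note: the preload step above amounts to copying the tarball from the host cache into the node and unpacking it over /var, after which repositories.json is rewritten and Docker restarted. A roughly equivalent manual reproduction looks like this (a sketch; the log itself copies via scp over the forwarded SSH port rather than docker cp):

	PRELOAD=/Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	docker cp "$PRELOAD" kubernetes-upgrade-470000:/preloaded.tar.lz4
	docker exec kubernetes-upgrade-470000 tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	docker exec kubernetes-upgrade-470000 rm /preloaded.tar.lz4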
	I0213 18:55:33.706174   47742 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 18:55:33.731018   47742 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0213 18:55:33.776856   47742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 18:55:33.877180   47742 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 18:55:35.528719   47742 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.651543492s)
	I0213 18:55:35.528808   47742 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 18:55:35.547421   47742 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 18:55:35.547433   47742 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0213 18:55:35.547444   47742 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 18:55:35.552485   47742 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 18:55:35.552554   47742 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 18:55:35.552875   47742 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0213 18:55:35.553410   47742 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0213 18:55:35.553593   47742 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 18:55:35.554252   47742 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 18:55:35.554318   47742 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 18:55:35.554469   47742 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0213 18:55:35.559507   47742 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 18:55:35.561456   47742 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 18:55:35.561635   47742 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0213 18:55:35.563861   47742 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0213 18:55:35.564059   47742 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0213 18:55:35.564238   47742 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 18:55:35.564373   47742 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 18:55:35.564503   47742 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 18:55:37.571848   47742 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0213 18:55:37.593574   47742 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0213 18:55:37.593609   47742 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 18:55:37.593675   47742 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0213 18:55:37.612666   47742 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0213 18:55:37.624247   47742 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0213 18:55:37.644855   47742 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0213 18:55:37.644879   47742 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 18:55:37.644937   47742 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0213 18:55:37.665465   47742 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0213 18:55:37.675688   47742 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0213 18:55:37.675989   47742 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0213 18:55:37.683456   47742 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 18:55:37.694738   47742 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0213 18:55:37.701020   47742 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0213 18:55:37.704112   47742 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0213 18:55:37.704177   47742 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0213 18:55:37.704212   47742 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0213 18:55:37.704237   47742 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0213 18:55:37.704323   47742 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0213 18:55:37.704377   47742 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0213 18:55:37.712100   47742 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0213 18:55:37.712141   47742 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 18:55:37.712256   47742 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 18:55:37.725617   47742 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0213 18:55:37.725680   47742 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 18:55:37.725796   47742 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0213 18:55:37.732309   47742 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0213 18:55:37.732347   47742 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0213 18:55:37.732387   47742 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0213 18:55:37.732422   47742 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0213 18:55:37.732505   47742 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0213 18:55:37.740908   47742 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0213 18:55:37.755060   47742 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0213 18:55:37.810144   47742 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 18:55:37.811184   47742 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0213 18:55:37.829205   47742 cache_images.go:92] LoadImages completed in 2.281783758s
	W0213 18:55:37.829260   47742 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
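Note: both docker images listings above show the preloaded v1.16.0 images tagged under k8s.gcr.io, while this minikube build expects registry.k8s.io names, so LoadImages decides they "wasn't preloaded" and falls back to the per-image cache under .minikube/cache/images/amd64/registry.k8s.io/, which does not exist in this workspace, hence the warning. What is actually present can be confirmed with the same listing the log runs, plus a look at the (missing) host-side cache; a small sketch:

	# List the image tags actually present on the node (same listing captured above)
	docker exec kubernetes-upgrade-470000 docker images --format '{{.Repository}}:{{.Tag}}'
	# The host-side per-image cache the fallback looked for is simply absent here
	ls /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/ 2>/dev/null || echo "no per-image cache"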
	I0213 18:55:37.829355   47742 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 18:55:37.882097   47742 cni.go:84] Creating CNI manager for ""
	I0213 18:55:37.882114   47742 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 18:55:37.882135   47742 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 18:55:37.882151   47742 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-470000 NodeName:kubernetes-upgrade-470000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 18:55:37.882245   47742 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-470000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-470000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 18:55:37.882294   47742 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-470000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
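Note: the kubeadm config and kubelet unit rendered above are copied to /var/tmp/minikube/kubeadm.yaml and the kubelet drop-in a few lines further down, and then handed to kubeadm init (the Start: line below). When a run stalls at the kubelet-check stage, as this one does, the same init can be replayed by hand inside the node with extra verbosity; a sketch, assuming the profile container from this run and that a kubeadm reset is acceptable:

	docker exec -it kubernetes-upgrade-470000 bash -c '
	  export PATH=/var/lib/minikube/binaries/v1.16.0:$PATH
	  kubeadm reset -f
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all -v=5
	'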
	I0213 18:55:37.882362   47742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0213 18:55:37.899665   47742 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 18:55:37.899735   47742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 18:55:37.917074   47742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0213 18:55:37.947939   47742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 18:55:37.978538   47742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0213 18:55:38.009766   47742 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0213 18:55:38.014422   47742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 18:55:38.033186   47742 certs.go:56] Setting up /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000 for IP: 192.168.67.2
	I0213 18:55:38.033207   47742 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5f1a81e3b2f96d4314e8cdee92a3e3396cb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:55:38.033420   47742 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key
	I0213 18:55:38.033514   47742 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key
	I0213 18:55:38.033575   47742 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.key
	I0213 18:55:38.033591   47742 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.crt with IP's: []
	I0213 18:55:38.276185   47742 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.crt ...
	I0213 18:55:38.276203   47742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.crt: {Name:mk8bd39d18d1ceec889707aabd764ff9ddc95b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:55:38.276676   47742 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.key ...
	I0213 18:55:38.276696   47742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.key: {Name:mk161cc16f341d9d011b196256e3924b4316541d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:55:38.276945   47742 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.key.c7fa3a9e
	I0213 18:55:38.276961   47742 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 18:55:38.379855   47742 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.crt.c7fa3a9e ...
	I0213 18:55:38.379871   47742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.crt.c7fa3a9e: {Name:mkd2b41d4f38b57d82f10f9dba007795d7f88ecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:55:38.380204   47742 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.key.c7fa3a9e ...
	I0213 18:55:38.380214   47742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.key.c7fa3a9e: {Name:mk4ddd5348bd62afa1d3da8ce0ae1ede26ff9faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:55:38.380453   47742 certs.go:337] copying /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.crt
	I0213 18:55:38.380671   47742 certs.go:341] copying /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.key
	I0213 18:55:38.380892   47742 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/proxy-client.key
	I0213 18:55:38.380925   47742 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/proxy-client.crt with IP's: []
	I0213 18:55:38.717554   47742 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/proxy-client.crt ...
	I0213 18:55:38.717582   47742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/proxy-client.crt: {Name:mk75f097402f00e0ab943e4be4f0209d5c0e4f36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:55:38.717933   47742 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/proxy-client.key ...
	I0213 18:55:38.717943   47742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/proxy-client.key: {Name:mkb20ad126c14a6a35164dce84912a3c40011d71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:55:38.718387   47742 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem (1338 bytes)
	W0213 18:55:38.718451   47742 certs.go:433] ignoring /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899_empty.pem, impossibly tiny 0 bytes
	I0213 18:55:38.718466   47742 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 18:55:38.718501   47742 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem (1078 bytes)
	I0213 18:55:38.718542   47742 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem (1123 bytes)
	I0213 18:55:38.718577   47742 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem (1679 bytes)
	I0213 18:55:38.718651   47742 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem (1708 bytes)
	I0213 18:55:38.719198   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 18:55:38.771493   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 18:55:38.829643   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 18:55:38.879209   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 18:55:38.931389   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 18:55:38.989793   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 18:55:39.043395   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 18:55:39.103470   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 18:55:39.157512   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /usr/share/ca-certificates/388992.pem (1708 bytes)
	I0213 18:55:39.211340   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 18:55:39.269059   47742 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem --> /usr/share/ca-certificates/38899.pem (1338 bytes)
	I0213 18:55:39.320312   47742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 18:55:39.352343   47742 ssh_runner.go:195] Run: openssl version
	I0213 18:55:39.358691   47742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 18:55:39.376252   47742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 18:55:39.380933   47742 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0213 18:55:39.380992   47742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 18:55:39.389115   47742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 18:55:39.407712   47742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38899.pem && ln -fs /usr/share/ca-certificates/38899.pem /etc/ssl/certs/38899.pem"
	I0213 18:55:39.425205   47742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38899.pem
	I0213 18:55:39.430136   47742 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 02:17 /usr/share/ca-certificates/38899.pem
	I0213 18:55:39.430191   47742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38899.pem
	I0213 18:55:39.437614   47742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38899.pem /etc/ssl/certs/51391683.0"
	I0213 18:55:39.455141   47742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388992.pem && ln -fs /usr/share/ca-certificates/388992.pem /etc/ssl/certs/388992.pem"
	I0213 18:55:39.472171   47742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388992.pem
	I0213 18:55:39.476946   47742 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 02:17 /usr/share/ca-certificates/388992.pem
	I0213 18:55:39.477001   47742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388992.pem
	I0213 18:55:39.484187   47742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/388992.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 18:55:39.502474   47742 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 18:55:39.507206   47742 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 18:55:39.507261   47742 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 18:55:39.507371   47742 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 18:55:39.526106   47742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 18:55:39.542446   47742 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 18:55:39.559499   47742 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 18:55:39.559566   47742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 18:55:39.576219   47742 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 18:55:39.576264   47742 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 18:55:39.637419   47742 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 18:55:39.637459   47742 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 18:55:39.901277   47742 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 18:55:39.901388   47742 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 18:55:39.901492   47742 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 18:55:40.084083   47742 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 18:55:40.098267   47742 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 18:55:40.106107   47742 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 18:55:40.179278   47742 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 18:55:40.200181   47742 out.go:204]   - Generating certificates and keys ...
	I0213 18:55:40.200272   47742 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 18:55:40.200362   47742 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 18:55:40.356292   47742 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 18:55:40.503240   47742 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 18:55:40.556840   47742 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 18:55:40.614987   47742 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 18:55:40.758888   47742 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 18:55:40.759008   47742 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-470000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0213 18:55:40.861610   47742 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 18:55:40.861741   47742 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-470000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0213 18:55:40.945147   47742 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 18:55:41.074425   47742 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 18:55:41.363364   47742 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 18:55:41.363431   47742 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 18:55:41.452432   47742 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 18:55:41.695613   47742 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 18:55:41.974255   47742 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 18:55:42.024932   47742 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 18:55:42.025528   47742 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 18:55:42.054081   47742 out.go:204]   - Booting up control plane ...
	I0213 18:55:42.054164   47742 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 18:55:42.054229   47742 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 18:55:42.054312   47742 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 18:55:42.054395   47742 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 18:55:42.054526   47742 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 18:56:22.034494   47742 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 18:56:22.035377   47742 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:56:22.035657   47742 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:56:27.036950   47742 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:56:27.037115   47742 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:56:37.039294   47742 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:56:37.039512   47742 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:56:57.041631   47742 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:56:57.041835   47742 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:57:37.098166   47742 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:57:37.098413   47742 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:57:37.098434   47742 kubeadm.go:322] 
	I0213 18:57:37.098493   47742 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 18:57:37.098549   47742 kubeadm.go:322] 	timed out waiting for the condition
	I0213 18:57:37.098564   47742 kubeadm.go:322] 
	I0213 18:57:37.098630   47742 kubeadm.go:322] This error is likely caused by:
	I0213 18:57:37.098672   47742 kubeadm.go:322] 	- The kubelet is not running
	I0213 18:57:37.098782   47742 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 18:57:37.098794   47742 kubeadm.go:322] 
	I0213 18:57:37.098891   47742 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 18:57:37.098944   47742 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 18:57:37.099019   47742 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 18:57:37.099050   47742 kubeadm.go:322] 
	I0213 18:57:37.099212   47742 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 18:57:37.099348   47742 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 18:57:37.099456   47742 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 18:57:37.099522   47742 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 18:57:37.099609   47742 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 18:57:37.099640   47742 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 18:57:37.103922   47742 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 18:57:37.104001   47742 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 18:57:37.104109   47742 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 18:57:37.104181   47742 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 18:57:37.104257   47742 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 18:57:37.104315   47742 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0213 18:57:37.104381   47742 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-470000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-470000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-470000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-470000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0213 18:57:37.104420   47742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 18:57:37.551907   47742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 18:57:37.573903   47742 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 18:57:37.573971   47742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 18:57:37.592564   47742 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 18:57:37.592596   47742 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 18:57:37.658406   47742 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 18:57:37.658732   47742 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 18:57:37.982931   47742 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 18:57:37.983060   47742 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 18:57:37.983160   47742 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 18:57:38.203941   47742 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 18:57:38.204058   47742 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 18:57:38.212766   47742 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 18:57:38.290695   47742 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 18:57:38.312137   47742 out.go:204]   - Generating certificates and keys ...
	I0213 18:57:38.312337   47742 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 18:57:38.312517   47742 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 18:57:38.312617   47742 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 18:57:38.312704   47742 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 18:57:38.312852   47742 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 18:57:38.312918   47742 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 18:57:38.312999   47742 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 18:57:38.313092   47742 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 18:57:38.313189   47742 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 18:57:38.313272   47742 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 18:57:38.313349   47742 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 18:57:38.313455   47742 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 18:57:38.600266   47742 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 18:57:38.812440   47742 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 18:57:38.887485   47742 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 18:57:38.981460   47742 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 18:57:38.983124   47742 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 18:57:39.004752   47742 out.go:204]   - Booting up control plane ...
	I0213 18:57:39.004871   47742 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 18:57:39.004983   47742 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 18:57:39.005067   47742 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 18:57:39.005162   47742 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 18:57:39.005342   47742 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 18:58:18.993314   47742 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 18:58:18.994481   47742 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:58:18.994704   47742 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:58:23.995301   47742 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:58:23.995476   47742 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:58:33.996525   47742 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:58:33.996685   47742 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:58:53.998312   47742 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:58:53.998513   47742 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:59:33.998971   47742 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 18:59:33.999153   47742 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 18:59:33.999167   47742 kubeadm.go:322] 
	I0213 18:59:33.999203   47742 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 18:59:33.999238   47742 kubeadm.go:322] 	timed out waiting for the condition
	I0213 18:59:33.999251   47742 kubeadm.go:322] 
	I0213 18:59:33.999285   47742 kubeadm.go:322] This error is likely caused by:
	I0213 18:59:33.999318   47742 kubeadm.go:322] 	- The kubelet is not running
	I0213 18:59:33.999400   47742 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 18:59:33.999408   47742 kubeadm.go:322] 
	I0213 18:59:33.999501   47742 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 18:59:33.999571   47742 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 18:59:33.999627   47742 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 18:59:33.999643   47742 kubeadm.go:322] 
	I0213 18:59:33.999755   47742 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 18:59:33.999859   47742 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 18:59:33.999954   47742 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 18:59:33.999999   47742 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 18:59:34.000065   47742 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 18:59:34.000092   47742 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 18:59:34.004687   47742 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 18:59:34.004769   47742 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 18:59:34.004925   47742 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 18:59:34.005029   47742 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 18:59:34.005119   47742 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 18:59:34.005192   47742 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0213 18:59:34.005212   47742 kubeadm.go:406] StartCluster complete in 3m54.444720747s
	I0213 18:59:34.005297   47742 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 18:59:34.023659   47742 logs.go:276] 0 containers: []
	W0213 18:59:34.023674   47742 logs.go:278] No container was found matching "kube-apiserver"
	I0213 18:59:34.023745   47742 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 18:59:34.042872   47742 logs.go:276] 0 containers: []
	W0213 18:59:34.042887   47742 logs.go:278] No container was found matching "etcd"
	I0213 18:59:34.042958   47742 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 18:59:34.060676   47742 logs.go:276] 0 containers: []
	W0213 18:59:34.060690   47742 logs.go:278] No container was found matching "coredns"
	I0213 18:59:34.060778   47742 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 18:59:34.078341   47742 logs.go:276] 0 containers: []
	W0213 18:59:34.078355   47742 logs.go:278] No container was found matching "kube-scheduler"
	I0213 18:59:34.078422   47742 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 18:59:34.095847   47742 logs.go:276] 0 containers: []
	W0213 18:59:34.095861   47742 logs.go:278] No container was found matching "kube-proxy"
	I0213 18:59:34.095928   47742 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 18:59:34.114604   47742 logs.go:276] 0 containers: []
	W0213 18:59:34.114627   47742 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 18:59:34.114712   47742 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 18:59:34.132886   47742 logs.go:276] 0 containers: []
	W0213 18:59:34.132900   47742 logs.go:278] No container was found matching "kindnet"
	I0213 18:59:34.132908   47742 logs.go:123] Gathering logs for kubelet ...
	I0213 18:59:34.132916   47742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 18:59:34.176398   47742 logs.go:123] Gathering logs for dmesg ...
	I0213 18:59:34.176415   47742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 18:59:34.196945   47742 logs.go:123] Gathering logs for describe nodes ...
	I0213 18:59:34.196959   47742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 18:59:34.299696   47742 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 18:59:34.299711   47742 logs.go:123] Gathering logs for Docker ...
	I0213 18:59:34.299721   47742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 18:59:34.322511   47742 logs.go:123] Gathering logs for container status ...
	I0213 18:59:34.322528   47742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0213 18:59:34.383878   47742 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0213 18:59:34.383914   47742 out.go:239] * 
	* 
	W0213 18:59:34.383949   47742 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 18:59:34.383962   47742 out.go:239] * 
	* 
	W0213 18:59:34.384632   47742 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 18:59:34.463992   47742 out.go:177] 
	W0213 18:59:34.512053   47742 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 18:59:34.512133   47742 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0213 18:59:34.512165   47742 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0213 18:59:34.534139   47742 out.go:177] 

                                                
                                                
** /stderr **
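The K8S_KUBELET_NOT_RUNNING exit above pairs the kubelet timeout with the cgroup-driver warning (Docker reports "cgroupfs" while kubeadm recommends "systemd"), and the Suggestion line in the log points at --extra-config=kubelet.cgroup-driver=systemd. The following is an illustrative triage sketch only, not part of the test output: the first four commands would run inside the node (e.g. via minikube ssh), the last on the host, with the profile name and flags taken from this log.

	# Confirm how far the kubelet got and why /healthz is refused
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50
	# Check which cgroup driver the Docker daemon inside the node is using
	docker info --format '{{.CgroupDriver}}'
	# List any control-plane containers the runtime managed to start
	docker ps -a | grep kube | grep -v pause
	# Retry the start with the kubelet pinned to the systemd cgroup driver,
	# as the Suggestion in the log proposes
	minikube start -p kubernetes-upgrade-470000 --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd --driver=docker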
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-470000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-470000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-470000: (1.639192252s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-470000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-470000 status --format={{.Host}}: exit status 7 (119.866068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-470000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
E0213 18:59:40.406659   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-470000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (31.301555819s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-470000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-470000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-470000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (481.956715ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-470000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18165
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-470000
	    minikube start -p kubernetes-upgrade-470000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4700002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-470000 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
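The K8S_DOWNGRADE_UNSUPPORTED exit above is the outcome this test step expects (the downgrade is supposed to be refused), and the log's own Suggestion lists the supported paths. A minimal sketch, assuming the kubernetes-upgrade-470000 profile is still running, of confirming the cluster stayed on the newer version and then restarting it there, mirroring what the test does next:

	# The API server should still report the v1.29.0-rc.2 control plane
	kubectl --context kubernetes-upgrade-470000 version --output=json
	# Restart the existing profile at the version it already runs,
	# which is the path the test takes after the refused downgrade
	minikube start -p kubernetes-upgrade-470000 --memory=2200 \
	  --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker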
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-470000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
E0213 19:00:23.885843   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-470000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (35.525154258s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-13 19:00:43.812991 -0800 PST m=+3182.735702286
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-470000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-470000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ae1cd3b581f9e093bf27588647a6a530f63b3615289a3faf74d906be41734ec",
	        "Created": "2024-02-14T02:55:19.126078325Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227045,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T02:59:37.739104056Z",
	            "FinishedAt": "2024-02-14T02:59:35.150346566Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5ae1cd3b581f9e093bf27588647a6a530f63b3615289a3faf74d906be41734ec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ae1cd3b581f9e093bf27588647a6a530f63b3615289a3faf74d906be41734ec/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ae1cd3b581f9e093bf27588647a6a530f63b3615289a3faf74d906be41734ec/hosts",
	        "LogPath": "/var/lib/docker/containers/5ae1cd3b581f9e093bf27588647a6a530f63b3615289a3faf74d906be41734ec/5ae1cd3b581f9e093bf27588647a6a530f63b3615289a3faf74d906be41734ec-json.log",
	        "Name": "/kubernetes-upgrade-470000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-470000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-470000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9ff687e1cd7a682bd6ee4ce504c9aec6659df05b4fd2a55c3fed050cfa62b665-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ff687e1cd7a682bd6ee4ce504c9aec6659df05b4fd2a55c3fed050cfa62b665/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ff687e1cd7a682bd6ee4ce504c9aec6659df05b4fd2a55c3fed050cfa62b665/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ff687e1cd7a682bd6ee4ce504c9aec6659df05b4fd2a55c3fed050cfa62b665/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-470000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-470000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-470000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-470000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-470000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b1a91e3ce3577e7bf1aba0f8e3ae95bc085647f217fb02da41fee63a80604d0",
	            "SandboxKey": "/var/run/docker/netns/5b1a91e3ce35",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55248"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55249"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55245"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55246"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55247"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-470000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5ae1cd3b581f",
	                        "kubernetes-upgrade-470000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "114f0594e736d2f5619267e1c82ac4c13bc3ffe289ac25e785ba5477911118cf",
	                    "EndpointID": "4436a8633086d088d2f3f22fdda75a00b362658214e2840d85200c2bc4d8cfff",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "kubernetes-upgrade-470000",
	                        "5ae1cd3b581f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-470000 -n kubernetes-upgrade-470000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-470000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-470000 logs -n 25: (3.19161426s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-210000 sudo                  | cilium-210000             | jenkins | v1.32.0 | 13 Feb 24 18:54 PST |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-210000 sudo                  | cilium-210000             | jenkins | v1.32.0 | 13 Feb 24 18:54 PST |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-210000 sudo cat              | cilium-210000             | jenkins | v1.32.0 | 13 Feb 24 18:54 PST |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-210000 sudo cat              | cilium-210000             | jenkins | v1.32.0 | 13 Feb 24 18:54 PST |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-210000 sudo                  | cilium-210000             | jenkins | v1.32.0 | 13 Feb 24 18:54 PST |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-210000 sudo                  | cilium-210000             | jenkins | v1.32.0 | 13 Feb 24 18:54 PST |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-210000 sudo                  | cilium-210000             | jenkins | v1.32.0 | 13 Feb 24 18:54 PST |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-210000 sudo find             | cilium-210000             | jenkins | v1.32.0 | 13 Feb 24 18:54 PST |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-210000 sudo crio             | cilium-210000             | jenkins | v1.32.0 | 13 Feb 24 18:54 PST |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-210000                       | cilium-210000             | jenkins | v1.32.0 | 13 Feb 24 18:54 PST | 13 Feb 24 18:54 PST |
	| start   | -p missing-upgrade-807000              | minikube                  | jenkins | v1.26.0 | 13 Feb 24 18:54 PST | 13 Feb 24 18:56 PST |
	|         | --memory=2200 --driver=docker          |                           |         |         |                     |                     |
	| delete  | -p offline-docker-855000               | offline-docker-855000     | jenkins | v1.32.0 | 13 Feb 24 18:55 PST | 13 Feb 24 18:55 PST |
	| start   | -p kubernetes-upgrade-470000           | kubernetes-upgrade-470000 | jenkins | v1.32.0 | 13 Feb 24 18:55 PST |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-807000              | missing-upgrade-807000    | jenkins | v1.32.0 | 13 Feb 24 18:57 PST | 13 Feb 24 18:57 PST |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-807000              | missing-upgrade-807000    | jenkins | v1.32.0 | 13 Feb 24 18:57 PST | 13 Feb 24 18:58 PST |
	| start   | -p stopped-upgrade-064000              | minikube                  | jenkins | v1.26.0 | 13 Feb 24 18:58 PST | 13 Feb 24 18:58 PST |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --vm-driver=docker                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-064000 stop            | minikube                  | jenkins | v1.26.0 | 13 Feb 24 18:58 PST | 13 Feb 24 18:58 PST |
	| start   | -p stopped-upgrade-064000              | stopped-upgrade-064000    | jenkins | v1.32.0 | 13 Feb 24 18:58 PST | 13 Feb 24 18:59 PST |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-064000              | stopped-upgrade-064000    | jenkins | v1.32.0 | 13 Feb 24 18:59 PST | 13 Feb 24 18:59 PST |
	| start   | -p running-upgrade-323000              | minikube                  | jenkins | v1.26.0 | 13 Feb 24 18:59 PST | 13 Feb 24 19:00 PST |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --vm-driver=docker                     |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-470000           | kubernetes-upgrade-470000 | jenkins | v1.32.0 | 13 Feb 24 18:59 PST | 13 Feb 24 18:59 PST |
	| start   | -p kubernetes-upgrade-470000           | kubernetes-upgrade-470000 | jenkins | v1.32.0 | 13 Feb 24 18:59 PST | 13 Feb 24 19:00 PST |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	| start   | -p running-upgrade-323000              | running-upgrade-323000    | jenkins | v1.32.0 | 13 Feb 24 19:00 PST |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470000           | kubernetes-upgrade-470000 | jenkins | v1.32.0 | 13 Feb 24 19:00 PST |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470000           | kubernetes-upgrade-470000 | jenkins | v1.32.0 | 13 Feb 24 19:00 PST | 13 Feb 24 19:00 PST |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 19:00:08
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 19:00:08.340373   48858 out.go:291] Setting OutFile to fd 1 ...
	I0213 19:00:08.340558   48858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:00:08.340563   48858 out.go:304] Setting ErrFile to fd 2...
	I0213 19:00:08.340567   48858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:00:08.340740   48858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 19:00:08.342105   48858 out.go:298] Setting JSON to false
	I0213 19:00:08.365495   48858 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":16467,"bootTime":1707863141,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 19:00:08.365690   48858 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 19:00:08.387114   48858 out.go:177] * [kubernetes-upgrade-470000] minikube v1.32.0 on Darwin 14.3.1
	I0213 19:00:08.429870   48858 out.go:177]   - MINIKUBE_LOCATION=18165
	I0213 19:00:08.429944   48858 notify.go:220] Checking for updates...
	I0213 19:00:08.471573   48858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:00:08.513863   48858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 19:00:08.534686   48858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 19:00:08.556079   48858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	I0213 19:00:08.615681   48858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 19:00:08.637712   48858 config.go:182] Loaded profile config "kubernetes-upgrade-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 19:00:08.638477   48858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 19:00:08.697198   48858 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 19:00:08.697371   48858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:00:08.804441   48858 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:false NGoroutines:120 SystemTime:2024-02-14 03:00:08.793037428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:00:08.826297   48858 out.go:177] * Using the docker driver based on existing profile
	I0213 19:00:08.868962   48858 start.go:298] selected driver: docker
	I0213 19:00:08.868994   48858 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-470000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:00:08.869074   48858 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 19:00:08.872467   48858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:00:08.978849   48858 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:false NGoroutines:120 SystemTime:2024-02-14 03:00:08.967939345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:00:08.979104   48858 cni.go:84] Creating CNI manager for ""
	I0213 19:00:08.979119   48858 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:00:08.979131   48858 start_flags.go:321] config:
	{Name:kubernetes-upgrade-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Sock
etVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:00:09.021646   48858 out.go:177] * Starting control plane node kubernetes-upgrade-470000 in cluster kubernetes-upgrade-470000
	I0213 19:00:09.042756   48858 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 19:00:09.064753   48858 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 19:00:09.085592   48858 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 19:00:09.085657   48858 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0213 19:00:09.085642   48858 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 19:00:09.085684   48858 cache.go:56] Caching tarball of preloaded images
	I0213 19:00:09.085879   48858 preload.go:174] Found /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 19:00:09.085896   48858 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0213 19:00:09.086451   48858 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/config.json ...
	I0213 19:00:09.138375   48858 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 19:00:09.138507   48858 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 19:00:09.138526   48858 cache.go:194] Successfully downloaded all kic artifacts
	I0213 19:00:09.138571   48858 start.go:365] acquiring machines lock for kubernetes-upgrade-470000: {Name:mkdfc57245ba73336bfea94694a4695b8e69a0e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 19:00:09.138659   48858 start.go:369] acquired machines lock for "kubernetes-upgrade-470000" in 68.547µs
	I0213 19:00:09.138681   48858 start.go:96] Skipping create...Using existing machine configuration
	I0213 19:00:09.138690   48858 fix.go:54] fixHost starting: 
	I0213 19:00:09.138930   48858 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470000 --format={{.State.Status}}
	I0213 19:00:09.191389   48858 fix.go:102] recreateIfNeeded on kubernetes-upgrade-470000: state=Running err=<nil>
	W0213 19:00:09.191421   48858 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 19:00:09.213114   48858 out.go:177] * Updating the running docker "kubernetes-upgrade-470000" container ...
	I0213 19:00:07.586275   48781 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (2.297846272s)
	I0213 19:00:07.586303   48781 start.go:475] detecting cgroup driver to use...
	I0213 19:00:07.586334   48781 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:00:07.586397   48781 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 19:00:07.603971   48781 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 19:00:07.604043   48781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 19:00:07.622665   48781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:00:07.653011   48781 ssh_runner.go:195] Run: which cri-dockerd
	I0213 19:00:07.656988   48781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 19:00:07.671125   48781 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 19:00:07.698895   48781 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 19:00:07.855726   48781 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 19:00:07.995730   48781 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 19:00:07.995863   48781 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 19:00:08.023943   48781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:00:08.092387   48781 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 19:00:09.254798   48858 machine.go:88] provisioning docker machine ...
	I0213 19:00:09.254858   48858 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-470000"
	I0213 19:00:09.255012   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:09.307870   48858 main.go:141] libmachine: Using SSH client type: native
	I0213 19:00:09.308228   48858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 55248 <nil> <nil>}
	I0213 19:00:09.308245   48858 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-470000 && echo "kubernetes-upgrade-470000" | sudo tee /etc/hostname
	I0213 19:00:09.470774   48858 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-470000
	
	I0213 19:00:09.470866   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:09.524013   48858 main.go:141] libmachine: Using SSH client type: native
	I0213 19:00:09.524302   48858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 55248 <nil> <nil>}
	I0213 19:00:09.524315   48858 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-470000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-470000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-470000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 19:00:09.662650   48858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 19:00:09.662670   48858 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18165-38421/.minikube CaCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18165-38421/.minikube}
	I0213 19:00:09.662688   48858 ubuntu.go:177] setting up certificates
	I0213 19:00:09.662699   48858 provision.go:83] configureAuth start
	I0213 19:00:09.662773   48858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470000
	I0213 19:00:09.715247   48858 provision.go:138] copyHostCerts
	I0213 19:00:09.715350   48858 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem, removing ...
	I0213 19:00:09.715362   48858 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
	I0213 19:00:09.715488   48858 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem (1078 bytes)
	I0213 19:00:09.715723   48858 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem, removing ...
	I0213 19:00:09.715730   48858 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
	I0213 19:00:09.715800   48858 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem (1123 bytes)
	I0213 19:00:09.715964   48858 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem, removing ...
	I0213 19:00:09.715970   48858 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
	I0213 19:00:09.716042   48858 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem (1679 bytes)
	I0213 19:00:09.716192   48858 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-470000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-470000]
	I0213 19:00:09.921092   48858 provision.go:172] copyRemoteCerts
	I0213 19:00:09.921162   48858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 19:00:09.921245   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:09.974350   48858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 19:00:10.078507   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 19:00:10.119363   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0213 19:00:10.159917   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 19:00:10.199731   48858 provision.go:86] duration metric: configureAuth took 537.019864ms
	I0213 19:00:10.199745   48858 ubuntu.go:193] setting minikube options for container-runtime
	I0213 19:00:10.199882   48858 config.go:182] Loaded profile config "kubernetes-upgrade-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 19:00:10.199963   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:10.255261   48858 main.go:141] libmachine: Using SSH client type: native
	I0213 19:00:10.255558   48858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 55248 <nil> <nil>}
	I0213 19:00:10.255570   48858 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 19:00:10.391982   48858 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 19:00:10.391998   48858 ubuntu.go:71] root file system type: overlay
	I0213 19:00:10.392076   48858 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 19:00:10.392158   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:10.444603   48858 main.go:141] libmachine: Using SSH client type: native
	I0213 19:00:10.444899   48858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 55248 <nil> <nil>}
	I0213 19:00:10.444951   48858 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 19:00:10.604944   48858 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 19:00:10.605044   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:10.657049   48858 main.go:141] libmachine: Using SSH client type: native
	I0213 19:00:10.657333   48858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 55248 <nil> <nil>}
	I0213 19:00:10.657347   48858 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 19:00:10.803294   48858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 19:00:10.803310   48858 machine.go:91] provisioned docker machine in 1.548501182s
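[editor's note] The lines from 19:00:10.444 to 19:00:10.803 write the full docker.service unit to /lib/systemd/system/docker.service.new over SSH and only move it into place (followed by daemon-reload, enable and restart) when `diff -u` reports a difference; note the empty `ExecStart=` line, which clears the ExecStart inherited from the base unit before setting the new one. A rough local Go sketch of that compare-then-swap step, for illustration only (paths as in the log, unit body heavily abbreviated, and in practice these commands run as root):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Abbreviated unit body; the empty ExecStart= clears the value inherited
	// from the base configuration before the real command is set.
	unit := `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
`
	if err := os.WriteFile("/lib/systemd/system/docker.service.new", []byte(unit), 0o644); err != nil {
		log.Fatal(err)
	}
	// `diff -u old new` exits non-zero when the files differ, so a non-nil
	// error is the signal to swap the file in and restart docker.
	if err := exec.Command("diff", "-u",
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new").Run(); err != nil {
		steps := [][]string{
			{"mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		}
		for _, s := range steps {
			if err := exec.Command(s[0], s[1:]...).Run(); err != nil {
				log.Fatal(err)
			}
		}
	}
}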
	I0213 19:00:10.803321   48858 start.go:300] post-start starting for "kubernetes-upgrade-470000" (driver="docker")
	I0213 19:00:10.803330   48858 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 19:00:10.803391   48858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 19:00:10.803458   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:10.855514   48858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 19:00:10.960934   48858 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 19:00:10.965191   48858 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 19:00:10.965218   48858 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 19:00:10.965231   48858 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 19:00:10.965236   48858 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 19:00:10.965245   48858 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/addons for local assets ...
	I0213 19:00:10.965335   48858 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/files for local assets ...
	I0213 19:00:10.965492   48858 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> 388992.pem in /etc/ssl/certs
	I0213 19:00:10.965653   48858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 19:00:10.979963   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:00:11.019900   48858 start.go:303] post-start completed in 216.571993ms
	I0213 19:00:11.019973   48858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 19:00:11.020037   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:11.072270   48858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 19:00:11.166076   48858 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 19:00:11.171183   48858 fix.go:56] fixHost completed within 2.032504021s
	I0213 19:00:11.171199   48858 start.go:83] releasing machines lock for "kubernetes-upgrade-470000", held for 2.032545679s
	I0213 19:00:11.171297   48858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470000
	I0213 19:00:11.224371   48858 ssh_runner.go:195] Run: cat /version.json
	I0213 19:00:11.224398   48858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 19:00:11.224447   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:11.224469   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:11.282066   48858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 19:00:11.282064   48858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 19:00:11.482651   48858 ssh_runner.go:195] Run: systemctl --version
	I0213 19:00:11.487698   48858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 19:00:11.493972   48858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 19:00:11.494030   48858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 19:00:11.508966   48858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 19:00:11.523862   48858 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0213 19:00:11.523881   48858 start.go:475] detecting cgroup driver to use...
	I0213 19:00:11.523892   48858 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:00:11.523998   48858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:00:11.551407   48858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 19:00:11.567325   48858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 19:00:11.583551   48858 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 19:00:11.583619   48858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 19:00:11.600308   48858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:00:11.616358   48858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 19:00:11.632287   48858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:00:11.648287   48858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 19:00:11.663261   48858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 19:00:11.679533   48858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 19:00:11.694159   48858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 19:00:11.709091   48858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:00:11.780880   48858 ssh_runner.go:195] Run: sudo systemctl restart containerd
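[editor's note] Between 19:00:11.551 and 19:00:11.679 the runner rewrites /etc/containerd/config.toml in place with a series of sed expressions (sandbox image, restrict_oom_score_adj, SystemdCgroup = false for the cgroupfs driver, runc v2, conf_dir) and then restarts containerd. A standard-library Go sketch of just one of those rewrites, shown for illustration only; the real code shells out to sed exactly as logged:

package main

import (
	"log"
	"os"
	"regexp"
)

// Flip SystemdCgroup to false while preserving indentation, mirroring the
// `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'` call above.
func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, updated, 0o644); err != nil {
		log.Fatal(err)
	}
}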
	I0213 19:00:18.520811   48781 ssh_runner.go:235] Completed: sudo systemctl restart docker: (10.428468946s)
	I0213 19:00:18.520885   48781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 19:00:18.540037   48781 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 19:00:18.564155   48781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 19:00:18.580617   48781 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 19:00:18.683356   48781 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 19:00:18.749325   48781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:00:18.812168   48781 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 19:00:18.847653   48781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 19:00:18.863564   48781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:00:18.925896   48781 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 19:00:19.021175   48781 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 19:00:19.021268   48781 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 19:00:19.025814   48781 start.go:543] Will wait 60s for crictl version
	I0213 19:00:19.025867   48781 ssh_runner.go:195] Run: which crictl
	I0213 19:00:19.029491   48781 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 19:00:19.062494   48781 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0213 19:00:19.062574   48781 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:00:19.097108   48781 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:00:19.157483   48781 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
	I0213 19:00:19.157612   48781 cli_runner.go:164] Run: docker exec -t running-upgrade-323000 dig +short host.docker.internal
	I0213 19:00:19.260849   48781 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 19:00:19.260952   48781 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 19:00:19.265444   48781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" running-upgrade-323000
	I0213 19:00:19.316444   48781 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0213 19:00:19.316532   48781 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:00:19.347628   48781 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 19:00:19.347640   48781 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
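[editor's note] docker.go:685/691 above list the images already present in the daemon and conclude that registry.k8s.io/kube-apiserver:v1.24.1 is not among them (only the k8s.gcr.io-tagged set is), so the preload tarball has to be copied over and extracted next. A small illustrative Go check of the same idea, using only os/exec and strings rather than minikube's actual helper:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same listing command the log runs inside the node.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	want := "registry.k8s.io/kube-apiserver:v1.24.1"
	if have[want] {
		fmt.Println(want, "already present, skipping preload extraction")
	} else {
		fmt.Println(want, "wasn't preloaded, extracting the preload tarball")
	}
}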
	I0213 19:00:19.347693   48781 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 19:00:19.361612   48781 ssh_runner.go:195] Run: which lz4
	I0213 19:00:19.365589   48781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 19:00:19.369216   48781 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0213 19:00:19.369242   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (425543115 bytes)
	I0213 19:00:22.058206   48858 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.277358888s)
	I0213 19:00:22.058223   48858 start.go:475] detecting cgroup driver to use...
	I0213 19:00:22.058236   48858 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:00:22.058305   48858 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 19:00:22.085514   48858 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 19:00:22.085603   48858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 19:00:22.113372   48858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:00:22.157342   48858 ssh_runner.go:195] Run: which cri-dockerd
	I0213 19:00:22.163987   48858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 19:00:22.186003   48858 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 19:00:22.222686   48858 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 19:00:22.320567   48858 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 19:00:22.437996   48858 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 19:00:22.438109   48858 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 19:00:22.492752   48858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:00:22.571213   48858 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 19:00:22.904251   48858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 19:00:22.923206   48858 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 19:00:22.947442   48858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 19:00:22.967542   48858 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 19:00:23.044843   48858 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 19:00:23.118263   48858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:00:23.191720   48858 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 19:00:23.229079   48858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 19:00:23.250259   48858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:00:23.324528   48858 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 19:00:23.441298   48858 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 19:00:23.441438   48858 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 19:00:23.447268   48858 start.go:543] Will wait 60s for crictl version
	I0213 19:00:23.447337   48858 ssh_runner.go:195] Run: which crictl
	I0213 19:00:23.452526   48858 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 19:00:23.520086   48858 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0213 19:00:23.520179   48858 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:00:23.547975   48858 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:00:23.596895   48858 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0213 19:00:23.596987   48858 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-470000 dig +short host.docker.internal
	I0213 19:00:23.731948   48858 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 19:00:23.732077   48858 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 19:00:23.737075   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:23.798450   48858 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 19:00:23.798536   48858 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:00:23.823070   48858 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 19:00:23.823092   48858 docker.go:615] Images already preloaded, skipping extraction
	I0213 19:00:23.823174   48858 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:00:23.846310   48858 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 19:00:23.846347   48858 cache_images.go:84] Images are preloaded, skipping loading
	I0213 19:00:23.846477   48858 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 19:00:23.901934   48858 cni.go:84] Creating CNI manager for ""
	I0213 19:00:23.901958   48858 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:00:23.901976   48858 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 19:00:23.902000   48858 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-470000 NodeName:kubernetes-upgrade-470000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 19:00:23.902133   48858 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-470000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 19:00:23.902208   48858 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-470000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 19:00:23.902303   48858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0213 19:00:23.921451   48858 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 19:00:23.921525   48858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 19:00:23.940500   48858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I0213 19:00:23.976500   48858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0213 19:00:24.012880   48858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
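[editor's note] kubeadm.go:176/181 above expand the options struct into the kubeadm.yaml dump shown and ship it to /var/tmp/minikube/kubeadm.yaml.new (2113 bytes); at 19:00:25.068 it is diffed against the existing kubeadm.yaml to decide whether a cluster restart suffices. A trimmed, standard-library sketch of that render step, for illustration only (the template text is abridged and is not minikube's real template):

package main

import (
	"log"
	"os"
	"text/template"
)

// Minimal render of a kubeadm InitConfiguration fragment from per-node values,
// in the spirit of the config dump above. The field set is cut down for brevity.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	params := struct {
		NodeName      string
		NodeIP        string
		APIServerPort int
	}{"kubernetes-upgrade-470000", "192.168.67.2", 8443}

	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		log.Fatal(err)
	}
}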
	I0213 19:00:24.048850   48858 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0213 19:00:24.054508   48858 certs.go:56] Setting up /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000 for IP: 192.168.67.2
	I0213 19:00:24.054538   48858 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5f1a81e3b2f96d4314e8cdee92a3e3396cb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:00:24.054759   48858 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key
	I0213 19:00:24.054834   48858 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key
	I0213 19:00:24.054930   48858 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.key
	I0213 19:00:24.055027   48858 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.key.c7fa3a9e
	I0213 19:00:24.055099   48858 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/proxy-client.key
	I0213 19:00:24.055399   48858 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem (1338 bytes)
	W0213 19:00:24.055448   48858 certs.go:433] ignoring /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899_empty.pem, impossibly tiny 0 bytes
	I0213 19:00:24.055461   48858 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 19:00:24.055496   48858 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem (1078 bytes)
	I0213 19:00:24.055536   48858 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem (1123 bytes)
	I0213 19:00:24.055567   48858 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem (1679 bytes)
	I0213 19:00:24.055646   48858 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:00:24.056221   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 19:00:24.105451   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 19:00:24.155026   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 19:00:24.204398   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 19:00:24.254337   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 19:00:24.303174   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 19:00:24.353106   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 19:00:24.404980   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 19:00:24.466331   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 19:00:24.521122   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem --> /usr/share/ca-certificates/38899.pem (1338 bytes)
	I0213 19:00:24.578965   48858 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /usr/share/ca-certificates/388992.pem (1708 bytes)
	I0213 19:00:24.635841   48858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 19:00:24.679703   48858 ssh_runner.go:195] Run: openssl version
	I0213 19:00:24.690094   48858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388992.pem && ln -fs /usr/share/ca-certificates/388992.pem /etc/ssl/certs/388992.pem"
	I0213 19:00:24.715452   48858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388992.pem
	I0213 19:00:24.722836   48858 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 02:17 /usr/share/ca-certificates/388992.pem
	I0213 19:00:24.722960   48858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388992.pem
	I0213 19:00:24.732628   48858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/388992.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 19:00:24.753271   48858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 19:00:24.773136   48858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:00:24.779742   48858 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:00:24.779816   48858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:00:24.788927   48858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 19:00:24.811400   48858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38899.pem && ln -fs /usr/share/ca-certificates/38899.pem /etc/ssl/certs/38899.pem"
	I0213 19:00:24.835617   48858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38899.pem
	I0213 19:00:24.843168   48858 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 02:17 /usr/share/ca-certificates/38899.pem
	I0213 19:00:24.843254   48858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38899.pem
	I0213 19:00:24.853671   48858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38899.pem /etc/ssl/certs/51391683.0"
	I0213 19:00:24.876360   48858 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 19:00:24.882551   48858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 19:00:24.891055   48858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 19:00:24.900040   48858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 19:00:24.909899   48858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 19:00:24.920154   48858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 19:00:24.928813   48858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
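[editor's note] The six `openssl x509 -noout -in ... -checkend 86400` runs above confirm that each control-plane certificate remains valid for at least another 24 hours before the existing cluster state is reused. The equivalent check in Go with crypto/x509, as an illustration (the path is the first certificate checked above and only resolves inside the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
// report whether the certificate will have expired 24 hours from now.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}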
	I0213 19:00:24.937528   48858 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-470000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:00:24.937674   48858 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:00:24.960325   48858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 19:00:24.980799   48858 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 19:00:24.980821   48858 kubeadm.go:636] restartCluster start
	I0213 19:00:24.980893   48858 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 19:00:25.000228   48858 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:25.000337   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:25.066356   48858 kubeconfig.go:92] found "kubernetes-upgrade-470000" server: "https://127.0.0.1:55247"
	I0213 19:00:25.067192   48858 kapi.go:59] client config for kubernetes-upgrade-470000: &rest.Config{Host:"https://127.0.0.1:55247", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.key", CAFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f7ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 19:00:25.068118   48858 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 19:00:25.087497   48858 api_server.go:166] Checking apiserver status ...
	I0213 19:00:25.087576   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:00:25.108816   48858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:25.587661   48858 api_server.go:166] Checking apiserver status ...
	I0213 19:00:25.587764   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:00:25.607078   48858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:26.087533   48858 api_server.go:166] Checking apiserver status ...
	I0213 19:00:26.087663   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:00:26.109188   48858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:26.587724   48858 api_server.go:166] Checking apiserver status ...
	I0213 19:00:26.587877   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:00:26.613459   48858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:27.087607   48858 api_server.go:166] Checking apiserver status ...
	I0213 19:00:27.087731   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:00:27.112462   48858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:27.588463   48858 api_server.go:166] Checking apiserver status ...
	I0213 19:00:27.588582   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:00:27.609219   48858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:28.088535   48858 api_server.go:166] Checking apiserver status ...
	I0213 19:00:28.088651   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:00:28.106799   48858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:27.172458   48781 docker.go:649] Took 7.806946 seconds to copy over tarball
	I0213 19:00:27.172587   48781 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 19:00:29.960359   48781 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.787760005s)
	I0213 19:00:29.960380   48781 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 19:00:30.029838   48781 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 19:00:30.064852   48781 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0213 19:00:30.106439   48781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:00:30.502294   48781 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 19:00:28.588115   48858 api_server.go:166] Checking apiserver status ...
	I0213 19:00:28.588196   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:00:28.610522   48858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:29.088282   48858 api_server.go:166] Checking apiserver status ...
	I0213 19:00:29.088394   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:00:29.111709   48858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:29.587511   48858 api_server.go:166] Checking apiserver status ...
	I0213 19:00:29.587655   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:00:29.665360   48858 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4430/cgroup
	W0213 19:00:29.690720   48858 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4430/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:29.690813   48858 ssh_runner.go:195] Run: ls
	I0213 19:00:29.700414   48858 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55247/healthz ...
	I0213 19:00:31.971992   48858 api_server.go:279] https://127.0.0.1:55247/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:00:31.972034   48858 retry.go:31] will retry after 307.715122ms: https://127.0.0.1:55247/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
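[editor's note] From 19:00:29.700 onward the runner polls https://127.0.0.1:55247/healthz and, while post-start hooks such as rbac/bootstrap-roles are still failing, receives 500 and retries after a short backoff (retry.go:31). A minimal polling sketch in Go; for brevity it skips TLS verification, whereas the real client authenticates with the profile's client certificate and CA from the rest.Config shown at 19:00:25.067:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Test-only: skip verification instead of wiring up the minikube CA and client cert.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://127.0.0.1:55247/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(300 * time.Millisecond) // the log shows sub-second backoff between retries
	}
	fmt.Println("timed out waiting for /healthz")
}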
	I0213 19:00:32.280032   48858 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55247/healthz ...
	I0213 19:00:32.288728   48858 api_server.go:279] https://127.0.0.1:55247/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:00:32.288770   48858 retry.go:31] will retry after 289.493272ms: https://127.0.0.1:55247/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:00:32.580334   48858 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55247/healthz ...
	I0213 19:00:32.585515   48858 api_server.go:279] https://127.0.0.1:55247/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:00:32.585537   48858 retry.go:31] will retry after 366.004176ms: https://127.0.0.1:55247/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:00:32.951617   48858 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55247/healthz ...
	I0213 19:00:32.961557   48858 api_server.go:279] https://127.0.0.1:55247/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:00:32.961620   48858 retry.go:31] will retry after 604.208226ms: https://127.0.0.1:55247/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
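The retry stanzas above are minikube polling the apiserver's /healthz endpoint (api_server.go) and sleeping between attempts (retry.go). Below is a minimal, self-contained Go sketch of that polling pattern; the port is copied from the log, but the client setup, attempt cap, and backoff growth are assumptions for illustration, not minikube's actual implementation.

// healthz_poll.go - illustrative sketch of the /healthz polling seen above.
// Endpoint port is taken from the log; timeouts and backoff are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate on 127.0.0.1,
		// so this illustrative probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	url := "https://127.0.0.1:55247/healthz" // port from the log above
	backoff := 250 * time.Millisecond        // assumed starting backoff

	for attempt := 1; attempt <= 20; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// A 500 response lists each poststarthook as [+] ok or
			// [-] failed, exactly like the blocks in the log above.
			fmt.Printf("attempt %d: %d\n%s\n", attempt, resp.StatusCode, body)
		}
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the wait between retries
	}
	fmt.Println("gave up waiting for apiserver healthz")
}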
	I0213 19:00:32.626400   48781 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.124099404s)
	I0213 19:00:32.626502   48781 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:00:32.660272   48781 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 19:00:32.660287   48781 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0213 19:00:32.660297   48781 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 19:00:32.665493   48781 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 19:00:32.666182   48781 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 19:00:32.666342   48781 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0213 19:00:32.666370   48781 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0213 19:00:32.666374   48781 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 19:00:32.666176   48781 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 19:00:32.666468   48781 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 19:00:32.666502   48781 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0213 19:00:32.671406   48781 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 19:00:32.671489   48781 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 19:00:32.671489   48781 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0213 19:00:32.671539   48781 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 19:00:32.671612   48781 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 19:00:32.671655   48781 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0213 19:00:32.672652   48781 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 19:00:32.672929   48781 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0213 19:00:34.551436   48781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0213 19:00:34.582137   48781 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0213 19:00:34.582176   48781 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 19:00:34.582261   48781 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0213 19:00:34.603125   48781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0213 19:00:34.613572   48781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1
	I0213 19:00:34.636499   48781 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0213 19:00:34.636521   48781 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 19:00:34.636573   48781 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0213 19:00:34.644349   48781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0213 19:00:34.668977   48781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1
	I0213 19:00:34.673639   48781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 19:00:34.673843   48781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0213 19:00:34.678119   48781 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0213 19:00:34.678149   48781 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0213 19:00:34.678214   48781 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0213 19:00:34.680352   48781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0213 19:00:34.695121   48781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0213 19:00:34.763013   48781 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0213 19:00:34.763050   48781 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 19:00:34.763052   48781 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0213 19:00:34.763076   48781 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0213 19:00:34.763114   48781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0213 19:00:34.763151   48781 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0213 19:00:34.763153   48781 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 19:00:34.763351   48781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0213 19:00:34.776057   48781 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0213 19:00:34.776083   48781 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 19:00:34.776155   48781 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0213 19:00:34.790725   48781 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0213 19:00:34.790781   48781 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0213 19:00:34.790911   48781 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0213 19:00:34.874163   48781 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0213 19:00:34.874209   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0213 19:00:34.874225   48781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0213 19:00:34.874281   48781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0213 19:00:34.874459   48781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0213 19:00:34.886990   48781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0213 19:00:34.887151   48781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0213 19:00:34.897175   48781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.1
	I0213 19:00:34.897259   48781 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0213 19:00:34.897305   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0213 19:00:34.982624   48781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 19:00:35.000585   48781 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0213 19:00:35.000633   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0213 19:00:35.089065   48781 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0213 19:00:35.089091   48781 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0213 19:00:35.487454   48781 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0213 19:00:35.689853   48781 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0213 19:00:35.689877   48781 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0213 19:00:35.889307   48781 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
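The cache-load sequence above repeats one pattern per image: docker image inspect to see whether the expected tag and ID are present, docker rmi when a stale tag has to go, an scp of the cached tarball into /var/lib/minikube/images, and finally sudo cat <tarball> | docker load. A rough Go sketch of the local equivalent follows; the helper name loadCachedImage is assumed, and it runs docker directly rather than over SSH as minikube does.

// cache_load.go - rough sketch of the image cache-load flow in the log.
// Paths and image names mirror the log; this is not minikube's cache_images.go.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadCachedImage(tarball, image string) error {
	// The log removes whatever tag is present before loading the cached
	// copy; ignore the error if the image does not exist yet.
	_ = exec.Command("docker", "rmi", image).Run()

	f, err := os.Open(tarball)
	if err != nil {
		return fmt.Errorf("open cached image: %w", err)
	}
	defer f.Close()

	// Equivalent of: sudo cat <tarball> | docker load
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/pause_3.7", "registry.k8s.io/pause:3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}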
	I0213 19:00:33.566849   48858 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55247/healthz ...
	I0213 19:00:33.573644   48858 api_server.go:279] https://127.0.0.1:55247/healthz returned 200:
	ok
	I0213 19:00:33.584973   48858 system_pods.go:86] 5 kube-system pods found
	I0213 19:00:33.584990   48858 system_pods.go:89] "etcd-kubernetes-upgrade-470000" [c3b64ad4-e460-4ddb-b775-5504c8799d2a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 19:00:33.584996   48858 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-470000" [18e3a4c3-a872-48ad-b93d-d7b034f3fdf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 19:00:33.585008   48858 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-470000" [5227b1af-876a-429a-866a-a395a7d6abf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 19:00:33.585016   48858 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-470000" [a9f2780e-578e-4a2b-ac18-68ea4349f3de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 19:00:33.585024   48858 system_pods.go:89] "storage-provisioner" [eab7c1e5-353e-4dbf-a77d-da94356cccfa] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0213 19:00:33.585037   48858 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-proxy
	I0213 19:00:33.585045   48858 kubeadm.go:1135] stopping kube-system containers ...
	I0213 19:00:33.585117   48858 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:00:33.604495   48858 docker.go:483] Stopping containers: [1ecb839879b2 5fdd3712fd67 733dd13a502a 8e2a90cf5711 b81e7e6d3dfc 61a4a6f4e90e 7cd439d5c552 3b3c0b6cc8f2 cb03d49c6911 66e14d749bbb 9a478f0c28cd b5f468743569 d8df079f1506 25440ca71f20 7714ab4e6c14 0dcf334f1e61]
	I0213 19:00:33.604573   48858 ssh_runner.go:195] Run: docker stop 1ecb839879b2 5fdd3712fd67 733dd13a502a 8e2a90cf5711 b81e7e6d3dfc 61a4a6f4e90e 7cd439d5c552 3b3c0b6cc8f2 cb03d49c6911 66e14d749bbb 9a478f0c28cd b5f468743569 d8df079f1506 25440ca71f20 7714ab4e6c14 0dcf334f1e61
	I0213 19:00:34.821377   48858 ssh_runner.go:235] Completed: docker stop 1ecb839879b2 5fdd3712fd67 733dd13a502a 8e2a90cf5711 b81e7e6d3dfc 61a4a6f4e90e 7cd439d5c552 3b3c0b6cc8f2 cb03d49c6911 66e14d749bbb 9a478f0c28cd b5f468743569 d8df079f1506 25440ca71f20 7714ab4e6c14 0dcf334f1e61: (1.216785625s)
	I0213 19:00:34.821459   48858 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 19:00:34.888022   48858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:00:34.915619   48858 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5703 Feb 14 02:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Feb 14 02:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Feb 14 02:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5691 Feb 14 02:57 /etc/kubernetes/scheduler.conf
	
	I0213 19:00:34.915725   48858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0213 19:00:34.956608   48858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0213 19:00:34.987091   48858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0213 19:00:35.056395   48858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0213 19:00:35.086802   48858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 19:00:35.115351   48858 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 19:00:35.115385   48858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:00:35.189460   48858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:00:36.339229   48858 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149751318s)
	I0213 19:00:36.339257   48858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:00:36.497274   48858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:00:36.579628   48858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:00:36.678769   48858 api_server.go:52] waiting for apiserver process to appear ...
	I0213 19:00:36.678890   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:00:37.178998   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:00:37.679051   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:00:38.179189   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:00:38.257691   48858 api_server.go:72] duration metric: took 1.578933314s to wait for apiserver process to appear ...
	I0213 19:00:38.257705   48858 api_server.go:88] waiting for apiserver healthz status ...
	I0213 19:00:38.257738   48858 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55247/healthz ...
	I0213 19:00:37.204933   48781 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0213 19:00:37.204986   48781 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0213 19:00:37.451575   48781 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0213 19:00:37.451669   48781 cache_images.go:92] LoadImages completed in 4.791396272s
	W0213 19:00:37.451758   48781 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0213 19:00:37.451914   48781 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 19:00:37.542845   48781 cni.go:84] Creating CNI manager for ""
	I0213 19:00:37.542865   48781 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:00:37.542913   48781 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 19:00:37.542930   48781 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-323000 NodeName:running-upgrade-323000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 19:00:37.543031   48781 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-323000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 19:00:37.543125   48781 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-323000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-323000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
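The [Service] drop-in above is generated from a handful of node-specific values (kubelet version, hostname override, node IP). The Go sketch below shows how such a drop-in could be rendered with text/template; the template text mirrors the log, while the struct and its values are illustrative assumptions rather than minikube's generator.

// kubelet_unit.go - sketch of rendering the kubelet [Service] drop-in above.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above.
	_ = tmpl.Execute(os.Stdout, struct {
		Version, NodeName, NodeIP string
	}{"v1.24.1", "running-upgrade-323000", "192.168.76.2"})
}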
	I0213 19:00:37.543248   48781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0213 19:00:37.559735   48781 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 19:00:37.559863   48781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 19:00:37.581904   48781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0213 19:00:37.614960   48781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 19:00:37.641561   48781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0213 19:00:37.675875   48781 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0213 19:00:37.685082   48781 certs.go:56] Setting up /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/running-upgrade-323000 for IP: 192.168.76.2
	I0213 19:00:37.685106   48781 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5f1a81e3b2f96d4314e8cdee92a3e3396cb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:00:37.685385   48781 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key
	I0213 19:00:37.685518   48781 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key
	I0213 19:00:37.685667   48781 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/running-upgrade-323000/client.key
	I0213 19:00:37.685757   48781 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/running-upgrade-323000/apiserver.key.31bdca25
	I0213 19:00:37.685830   48781 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/running-upgrade-323000/proxy-client.key
	I0213 19:00:37.686040   48781 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem (1338 bytes)
	W0213 19:00:37.686074   48781 certs.go:433] ignoring /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899_empty.pem, impossibly tiny 0 bytes
	I0213 19:00:37.686085   48781 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 19:00:37.686121   48781 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem (1078 bytes)
	I0213 19:00:37.686212   48781 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem (1123 bytes)
	I0213 19:00:37.686274   48781 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem (1679 bytes)
	I0213 19:00:37.686392   48781 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:00:37.687096   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/running-upgrade-323000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 19:00:37.731261   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/running-upgrade-323000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 19:00:37.769309   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/running-upgrade-323000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 19:00:37.822943   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/running-upgrade-323000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 19:00:37.862701   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 19:00:37.914181   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 19:00:37.950569   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 19:00:37.999123   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 19:00:38.043331   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 19:00:38.088425   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem --> /usr/share/ca-certificates/38899.pem (1338 bytes)
	I0213 19:00:38.132176   48781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /usr/share/ca-certificates/388992.pem (1708 bytes)
	I0213 19:00:38.175037   48781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 19:00:38.209809   48781 ssh_runner.go:195] Run: openssl version
	I0213 19:00:38.217027   48781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 19:00:38.233179   48781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:00:38.238215   48781 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:00:38.238298   48781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:00:38.244160   48781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 19:00:38.257750   48781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38899.pem && ln -fs /usr/share/ca-certificates/38899.pem /etc/ssl/certs/38899.pem"
	I0213 19:00:38.276448   48781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38899.pem
	I0213 19:00:38.281689   48781 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 02:17 /usr/share/ca-certificates/38899.pem
	I0213 19:00:38.281753   48781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38899.pem
	I0213 19:00:38.288976   48781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38899.pem /etc/ssl/certs/51391683.0"
	I0213 19:00:38.308859   48781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388992.pem && ln -fs /usr/share/ca-certificates/388992.pem /etc/ssl/certs/388992.pem"
	I0213 19:00:38.326815   48781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388992.pem
	I0213 19:00:38.332991   48781 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 02:17 /usr/share/ca-certificates/388992.pem
	I0213 19:00:38.333036   48781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388992.pem
	I0213 19:00:38.338753   48781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/388992.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 19:00:38.352814   48781 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 19:00:38.357069   48781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 19:00:38.363133   48781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 19:00:38.369595   48781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 19:00:38.376197   48781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 19:00:38.382891   48781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 19:00:38.389377   48781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
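Each openssl x509 -noout -in <cert> -checkend 86400 call above asks whether the certificate expires within the next 24 hours (86,400 seconds). An equivalent check in Go is sketched below; the file path is just one of the certs named in the log, and the helper name is an assumption.

// certcheck.go - sketch of the `-checkend 86400` checks above: parse a PEM
// certificate and report whether it expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + window" falls past the certificate's NotAfter.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}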
	I0213 19:00:38.395628   48781 kubeadm.go:404] StartCluster: {Name:running-upgrade-323000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-323000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:00:38.395765   48781 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:00:38.432515   48781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 19:00:38.447037   48781 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 19:00:38.447069   48781 kubeadm.go:636] restartCluster start
	I0213 19:00:38.447185   48781 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 19:00:38.462343   48781 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:38.462428   48781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" running-upgrade-323000
	I0213 19:00:38.526694   48781 kubeconfig.go:135] verify returned: extract IP: "running-upgrade-323000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:00:38.526882   48781 kubeconfig.go:146] "running-upgrade-323000" context is missing from /Users/jenkins/minikube-integration/18165-38421/kubeconfig - will repair!
	I0213 19:00:38.527248   48781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/kubeconfig: {Name:mk18bf84f3ce48ab7f0238c5bd9b6dfe6fbb866a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:00:38.528019   48781 kapi.go:59] client config for running-upgrade-323000: &rest.Config{Host:"https://127.0.0.1:55231", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/running-upgrade-323000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/running-upgrade-323000/client.key", CAFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f7ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 19:00:38.528685   48781 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 19:00:38.543135   48781 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-02-14 02:59:41.379245163 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-02-14 03:00:37.667773448 +0000
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-323000"
	   kubeletExtraArgs:
	     node-ip: 192.168.76.2
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
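The diff above is what drives the "needs reconfigure: configs differ" decision: the deployed /var/tmp/minikube/kubeadm.yaml is compared against the freshly generated kubeadm.yaml.new, and the cluster is reconfigured only when they differ. A minimal Go sketch of that comparison follows; needsReconfigure is a hypothetical helper, not minikube's kubeadm.go.

// reconfigure_check.go - sketch of the "configs differ" decision above.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func needsReconfigure(current, generated string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		// A missing deployed config also means we have to (re)configure.
		if os.IsNotExist(err) {
			return true, nil
		}
		return false, err
	}
	b, err := os.ReadFile(generated)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	differ, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("needs reconfigure:", differ)
}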
	I0213 19:00:38.543156   48781 kubeadm.go:1135] stopping kube-system containers ...
	I0213 19:00:38.543228   48781 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:00:38.578192   48781 docker.go:483] Stopping containers: [776349d95bc7 13bfbd7115da ffc8fee492f8 b2809601d522 c31fa1d5b2e6 74b58ee79641 0ce79bb06a24 64d18dca6abc 465a3201d344 4c0cc59d99e5 d98d79fe073c e476c8ee9aaf 2c78a7e53a07 363337b2ac16 ea6ed2541533 aafd0cfa15f1 749dce30e8e7 a9d0eb62fcd5 9ffec2b9474f 3db0657594d7 a3afbdd154f2 d1d118c7a291 9e503e0cbbc3 78bfdb803ba2]
	I0213 19:00:38.578331   48781 ssh_runner.go:195] Run: docker stop 776349d95bc7 13bfbd7115da ffc8fee492f8 b2809601d522 c31fa1d5b2e6 74b58ee79641 0ce79bb06a24 64d18dca6abc 465a3201d344 4c0cc59d99e5 d98d79fe073c e476c8ee9aaf 2c78a7e53a07 363337b2ac16 ea6ed2541533 aafd0cfa15f1 749dce30e8e7 a9d0eb62fcd5 9ffec2b9474f 3db0657594d7 a3afbdd154f2 d1d118c7a291 9e503e0cbbc3 78bfdb803ba2
	I0213 19:00:38.766208   48781 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 19:00:38.801475   48781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:00:38.817297   48781 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 14 02:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 14 02:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Feb 14 02:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 14 02:59 /etc/kubernetes/scheduler.conf
	
	I0213 19:00:38.817370   48781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0213 19:00:38.832083   48781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0213 19:00:38.845825   48781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0213 19:00:38.859646   48781 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:38.859701   48781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0213 19:00:38.873384   48781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0213 19:00:38.889920   48781 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:00:38.890028   48781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0213 19:00:38.907893   48781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 19:00:38.921915   48781 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 19:00:38.921930   48781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:00:38.970304   48781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:00:39.726035   48781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:00:39.949401   48781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:00:40.020254   48781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:00:40.164281   48781 api_server.go:52] waiting for apiserver process to appear ...
	I0213 19:00:40.164389   48781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:00:40.664527   48781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:00:41.164488   48781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:00:41.664622   48781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:00:40.761041   48858 api_server.go:279] https://127.0.0.1:55247/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 19:00:40.762137   48858 api_server.go:103] status: https://127.0.0.1:55247/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 19:00:40.762156   48858 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55247/healthz ...
	I0213 19:00:40.774202   48858 api_server.go:279] https://127.0.0.1:55247/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:00:40.774236   48858 api_server.go:103] status: https://127.0.0.1:55247/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:00:41.258704   48858 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55247/healthz ...
	I0213 19:00:41.264058   48858 api_server.go:279] https://127.0.0.1:55247/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:00:41.264072   48858 api_server.go:103] status: https://127.0.0.1:55247/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:00:41.759219   48858 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55247/healthz ...
	I0213 19:00:41.767498   48858 api_server.go:279] https://127.0.0.1:55247/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:00:41.767527   48858 api_server.go:103] status: https://127.0.0.1:55247/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:00:42.258539   48858 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55247/healthz ...
	I0213 19:00:42.265333   48858 api_server.go:279] https://127.0.0.1:55247/healthz returned 200:
	ok
	I0213 19:00:42.273814   48858 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 19:00:42.273831   48858 api_server.go:131] duration metric: took 4.016147056s to wait for apiserver health ...
	I0213 19:00:42.273853   48858 cni.go:84] Creating CNI manager for ""
	I0213 19:00:42.273887   48858 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:00:42.312548   48858 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 19:00:42.351591   48858 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 19:00:42.375674   48858 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 19:00:42.413327   48858 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 19:00:42.421936   48858 system_pods.go:59] 5 kube-system pods found
	I0213 19:00:42.421953   48858 system_pods.go:61] "etcd-kubernetes-upgrade-470000" [c3b64ad4-e460-4ddb-b775-5504c8799d2a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 19:00:42.421958   48858 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-470000" [18e3a4c3-a872-48ad-b93d-d7b034f3fdf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 19:00:42.421967   48858 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-470000" [5227b1af-876a-429a-866a-a395a7d6abf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 19:00:42.421973   48858 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-470000" [a9f2780e-578e-4a2b-ac18-68ea4349f3de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 19:00:42.421977   48858 system_pods.go:61] "storage-provisioner" [eab7c1e5-353e-4dbf-a77d-da94356cccfa] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0213 19:00:42.421983   48858 system_pods.go:74] duration metric: took 8.643462ms to wait for pod list to return data ...
	I0213 19:00:42.421990   48858 node_conditions.go:102] verifying NodePressure condition ...
	I0213 19:00:42.424979   48858 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 19:00:42.424994   48858 node_conditions.go:123] node cpu capacity is 12
	I0213 19:00:42.425005   48858 node_conditions.go:105] duration metric: took 3.011547ms to run NodePressure ...
	I0213 19:00:42.425015   48858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:00:42.692447   48858 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 19:00:42.704976   48858 ops.go:34] apiserver oom_adj: -16
	I0213 19:00:42.705051   48858 kubeadm.go:640] restartCluster took 17.724284716s
	I0213 19:00:42.705073   48858 kubeadm.go:406] StartCluster complete in 17.767675165s
	I0213 19:00:42.705095   48858 settings.go:142] acquiring lock: {Name:mke46562c9f92468d93bd6cd756238f74ba38936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:00:42.705251   48858 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:00:42.706145   48858 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/kubeconfig: {Name:mk18bf84f3ce48ab7f0238c5bd9b6dfe6fbb866a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:00:42.706576   48858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 19:00:42.706632   48858 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 19:00:42.706735   48858 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-470000"
	I0213 19:00:42.706752   48858 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-470000"
	I0213 19:00:42.706757   48858 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-470000"
	W0213 19:00:42.706769   48858 addons.go:243] addon storage-provisioner should already be in state true
	I0213 19:00:42.706779   48858 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-470000"
	I0213 19:00:42.706801   48858 config.go:182] Loaded profile config "kubernetes-upgrade-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 19:00:42.706826   48858 host.go:66] Checking if "kubernetes-upgrade-470000" exists ...
	I0213 19:00:42.707166   48858 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470000 --format={{.State.Status}}
	I0213 19:00:42.707311   48858 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470000 --format={{.State.Status}}
	I0213 19:00:42.707296   48858 kapi.go:59] client config for kubernetes-upgrade-470000: &rest.Config{Host:"https://127.0.0.1:55247", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.key", CAFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f7ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 19:00:42.716624   48858 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-470000" context rescaled to 1 replicas
	I0213 19:00:42.716682   48858 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 19:00:42.762593   48858 out.go:177] * Verifying Kubernetes components...
	I0213 19:00:42.821174   48858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 19:00:42.850016   48858 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 19:00:42.829685   48858 kapi.go:59] client config for kubernetes-upgrade-470000: &rest.Config{Host:"https://127.0.0.1:55247", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubernetes-upgrade-470000/client.key", CAFile:"/Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f7ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 19:00:42.840696   48858 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 19:00:42.844181   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:42.850212   48858 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-470000"
	W0213 19:00:42.871004   48858 addons.go:243] addon default-storageclass should already be in state true
	I0213 19:00:42.871011   48858 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 19:00:42.871022   48858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 19:00:42.871036   48858 host.go:66] Checking if "kubernetes-upgrade-470000" exists ...
	I0213 19:00:42.871097   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:42.873870   48858 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470000 --format={{.State.Status}}
	I0213 19:00:42.941353   48858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 19:00:42.941364   48858 api_server.go:52] waiting for apiserver process to appear ...
	I0213 19:00:42.941473   48858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:00:42.942259   48858 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 19:00:42.942269   48858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 19:00:42.942336   48858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470000
	I0213 19:00:42.964393   48858 api_server.go:72] duration metric: took 247.678668ms to wait for apiserver process to appear ...
	I0213 19:00:42.964433   48858 api_server.go:88] waiting for apiserver healthz status ...
	I0213 19:00:42.964459   48858 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55247/healthz ...
	I0213 19:00:42.971203   48858 api_server.go:279] https://127.0.0.1:55247/healthz returned 200:
	ok
	I0213 19:00:42.973160   48858 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 19:00:42.973181   48858 api_server.go:131] duration metric: took 8.738651ms to wait for apiserver health ...
	I0213 19:00:42.973194   48858 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 19:00:42.979136   48858 system_pods.go:59] 5 kube-system pods found
	I0213 19:00:42.979158   48858 system_pods.go:61] "etcd-kubernetes-upgrade-470000" [c3b64ad4-e460-4ddb-b775-5504c8799d2a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 19:00:42.979165   48858 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-470000" [18e3a4c3-a872-48ad-b93d-d7b034f3fdf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 19:00:42.979180   48858 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-470000" [5227b1af-876a-429a-866a-a395a7d6abf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 19:00:42.979188   48858 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-470000" [a9f2780e-578e-4a2b-ac18-68ea4349f3de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 19:00:42.979193   48858 system_pods.go:61] "storage-provisioner" [eab7c1e5-353e-4dbf-a77d-da94356cccfa] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0213 19:00:42.979201   48858 system_pods.go:74] duration metric: took 6.000771ms to wait for pod list to return data ...
	I0213 19:00:42.979208   48858 kubeadm.go:581] duration metric: took 262.498833ms to wait for : map[apiserver:true system_pods:true] ...
	I0213 19:00:42.979218   48858 node_conditions.go:102] verifying NodePressure condition ...
	I0213 19:00:42.982923   48858 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 19:00:42.982937   48858 node_conditions.go:123] node cpu capacity is 12
	I0213 19:00:42.982958   48858 node_conditions.go:105] duration metric: took 3.725767ms to run NodePressure ...
	I0213 19:00:42.982967   48858 start.go:228] waiting for startup goroutines ...
	I0213 19:00:43.005496   48858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/kubernetes-upgrade-470000/id_rsa Username:docker}
	I0213 19:00:43.070636   48858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 19:00:43.130626   48858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 19:00:43.628688   48858 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0213 19:00:43.670309   48858 addons.go:505] enable addons completed in 963.713262ms: enabled=[storage-provisioner default-storageclass]
	I0213 19:00:43.670337   48858 start.go:233] waiting for cluster config update ...
	I0213 19:00:43.670354   48858 start.go:242] writing updated cluster config ...
	I0213 19:00:43.670757   48858 ssh_runner.go:195] Run: rm -f paused
	I0213 19:00:43.719617   48858 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0213 19:00:43.740193   48858 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-470000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 14 03:00:23 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:23Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 14 03:00:23 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 14 03:00:23 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 14 03:00:23 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:23Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 14 03:00:23 kubernetes-upgrade-470000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 14 03:00:28 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3b3c0b6cc8f2d73395b604bfdd0d04b5e5bbfe7fda13bccc0faa903143d2f194/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 03:00:28 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/61a4a6f4e90e702dea72c166c62d72bea66a59281849ea07abcae2af0301e23d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 03:00:28 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7cd439d5c552cd7e089be70ab26e3c961e5359ed3a3e45240f595bb67ce9e872/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 03:00:28 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b81e7e6d3dfc8edbf97d230fb5aa6e949e5bbd6f129d8adde6c9be818d579b61/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 03:00:33 kubernetes-upgrade-470000 dockerd[3547]: time="2024-02-14T03:00:33.668622096Z" level=info msg="ignoring event" container=7cd439d5c552cd7e089be70ab26e3c961e5359ed3a3e45240f595bb67ce9e872 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:00:33 kubernetes-upgrade-470000 dockerd[3547]: time="2024-02-14T03:00:33.668652547Z" level=info msg="ignoring event" container=3b3c0b6cc8f2d73395b604bfdd0d04b5e5bbfe7fda13bccc0faa903143d2f194 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:00:33 kubernetes-upgrade-470000 dockerd[3547]: time="2024-02-14T03:00:33.669936922Z" level=info msg="ignoring event" container=b81e7e6d3dfc8edbf97d230fb5aa6e949e5bbd6f129d8adde6c9be818d579b61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:00:33 kubernetes-upgrade-470000 dockerd[3547]: time="2024-02-14T03:00:33.670012203Z" level=info msg="ignoring event" container=733dd13a502ad3f75780bae42c7bc7093fc1b4f8c124add3cf6ff100e15c9b07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:00:33 kubernetes-upgrade-470000 dockerd[3547]: time="2024-02-14T03:00:33.671504877Z" level=info msg="ignoring event" container=61a4a6f4e90e702dea72c166c62d72bea66a59281849ea07abcae2af0301e23d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:00:33 kubernetes-upgrade-470000 dockerd[3547]: time="2024-02-14T03:00:33.671554107Z" level=info msg="ignoring event" container=8e2a90cf57113c5acd60c91e37d9b9e43a89b53481cbf0d39c8e47235cad28fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:00:33 kubernetes-upgrade-470000 dockerd[3547]: time="2024-02-14T03:00:33.757566162Z" level=info msg="ignoring event" container=1ecb839879b28dba5432778f95c25086ac2bb3f6356a3880fad052ad3a089f1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:00:34 kubernetes-upgrade-470000 dockerd[3547]: time="2024-02-14T03:00:34.791731048Z" level=info msg="ignoring event" container=5fdd3712fd6735cb00b7f5d23fd22037840d550e9666e238629d8627174cef66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:00:35 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/56ff988837baac3c1e80830e49295a51942709803e85847f7ca021bd64e40136/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 03:00:35 kubernetes-upgrade-470000 cri-dockerd[3764]: W0214 03:00:35.066729    3764 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 14 03:00:35 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9d2a5ea03fc12780f8fbc837533d9a84243e138ad1f41abc43c585f650dba3cc/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 03:00:35 kubernetes-upgrade-470000 cri-dockerd[3764]: W0214 03:00:35.071304    3764 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 14 03:00:35 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1df55a35266fe5c72bd97071d77dee5d3bdf17f3403c024cfa0924daf9a56870/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 14 03:00:35 kubernetes-upgrade-470000 cri-dockerd[3764]: W0214 03:00:35.082272    3764 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 14 03:00:37 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:37Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"0dcf334f1e61225228fa5eb9f1313b003e7642243678055e8d28f2efb860b86a\". Proceed without further sandbox information."
	Feb 14 03:00:37 kubernetes-upgrade-470000 cri-dockerd[3764]: time="2024-02-14T03:00:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dede236dd3aae52d393852a1292720c666c1b64f361c9c440cdb3bda29cd65bd/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f6a9a2e7af2cd       bbb47a0f83324       8 seconds ago       Running             kube-apiserver            2                   1df55a35266fe       kube-apiserver-kubernetes-upgrade-470000
	cf1672265a500       a0eed15eed449       8 seconds ago       Running             etcd                      2                   dede236dd3aae       etcd-kubernetes-upgrade-470000
	aae599e7b5b7a       4270645ed6b7a       8 seconds ago       Running             kube-scheduler            2                   9d2a5ea03fc12       kube-scheduler-kubernetes-upgrade-470000
	18ecfa8ed388c       d4e01cdf63970       8 seconds ago       Running             kube-controller-manager   2                   56ff988837baa       kube-controller-manager-kubernetes-upgrade-470000
	1ecb839879b28       a0eed15eed449       16 seconds ago      Exited              etcd                      1                   3b3c0b6cc8f2d       etcd-kubernetes-upgrade-470000
	5fdd3712fd673       bbb47a0f83324       16 seconds ago      Exited              kube-apiserver            1                   61a4a6f4e90e7       kube-apiserver-kubernetes-upgrade-470000
	733dd13a502ad       d4e01cdf63970       16 seconds ago      Exited              kube-controller-manager   1                   b81e7e6d3dfc8       kube-controller-manager-kubernetes-upgrade-470000
	8e2a90cf57113       4270645ed6b7a       17 seconds ago      Exited              kube-scheduler            1                   7cd439d5c552c       kube-scheduler-kubernetes-upgrade-470000
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-470000
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-470000
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 02:59:59 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-470000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 03:00:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 03:00:40 +0000   Wed, 14 Feb 2024 02:59:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 03:00:40 +0000   Wed, 14 Feb 2024 02:59:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 03:00:40 +0000   Wed, 14 Feb 2024 02:59:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 03:00:40 +0000   Wed, 14 Feb 2024 03:00:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-470000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067672Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067672Ki
	  pods:               110
	System Info:
	  Machine ID:                 71b2a67cfd054a769fd0bcb732e726af
	  System UUID:                71b2a67cfd054a769fd0bcb732e726af
	  Boot ID:                    f9e2bb32-14d2-464f-a920-a74ec4f29d93
	  Kernel Version:             6.6.12-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-470000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kube-apiserver-kubernetes-upgrade-470000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-470000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-kubernetes-upgrade-470000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (5%)   0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 49s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet  Node kubernetes-upgrade-470000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet  Node kubernetes-upgrade-470000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x7 over 49s)  kubelet  Node kubernetes-upgrade-470000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  49s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-470000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-470000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet  Node kubernetes-upgrade-470000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[Feb14 02:09] systemd[1534]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [1ecb839879b2] <==
	{"level":"info","ts":"2024-02-14T03:00:29.47666Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-14T03:00:30.960396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-14T03:00:30.960468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-14T03:00:30.960485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-14T03:00:30.960493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-02-14T03:00:30.960511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-14T03:00:30.960518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-02-14T03:00:30.960523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-14T03:00:30.991472Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-470000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T03:00:30.991844Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T03:00:30.992394Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T03:00:30.992619Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T03:00:30.992686Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T03:00:31.00265Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T03:00:31.002942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-14T03:00:33.637137Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-14T03:00:33.637228Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-470000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2024-02-14T03:00:33.637389Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-14T03:00:33.637538Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-14T03:00:33.662651Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-14T03:00:33.66276Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-14T03:00:33.662865Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2024-02-14T03:00:33.66541Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-14T03:00:33.665547Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-14T03:00:33.665586Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-470000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	
	==> etcd [cf1672265a50] <==
	{"level":"info","ts":"2024-02-14T03:00:37.686492Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T03:00:37.686577Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T03:00:37.686781Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-14T03:00:37.687298Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-14T03:00:37.687404Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-14T03:00:37.687024Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-14T03:00:37.687537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2024-02-14T03:00:37.688162Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-02-14T03:00:37.687792Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-14T03:00:37.689875Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:00:37.689934Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:00:39.373263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-14T03:00:39.373323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-14T03:00:39.373383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-14T03:00:39.373393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2024-02-14T03:00:39.373397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-02-14T03:00:39.373403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2024-02-14T03:00:39.373408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-02-14T03:00:39.375075Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-470000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T03:00:39.375162Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T03:00:39.375224Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T03:00:39.375868Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T03:00:39.37597Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T03:00:39.380943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T03:00:39.383499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	
	
	==> kernel <==
	 03:00:46 up  1:39,  0 users,  load average: 7.12, 5.12, 4.52
	Linux kubernetes-upgrade-470000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [5fdd3712fd67] <==
	W0214 03:00:34.642979       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.642980       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.642817       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.642991       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.642930       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643022       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643030       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643034       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643029       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.642904       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.642930       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.642996       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643002       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643196       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643002       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643205       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643232       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643246       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643259       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643269       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643274       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643279       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643280       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.643345       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0214 03:00:34.657445       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f6a9a2e7af2c] <==
	I0214 03:00:40.684191       1 controller.go:85] Starting OpenAPI V3 controller
	I0214 03:00:40.684246       1 naming_controller.go:291] Starting NamingConditionController
	I0214 03:00:40.684297       1 establishing_controller.go:76] Starting EstablishingController
	I0214 03:00:40.684474       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0214 03:00:40.684540       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0214 03:00:40.684597       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0214 03:00:40.767509       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 03:00:40.856157       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0214 03:00:40.856266       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0214 03:00:40.856455       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0214 03:00:40.856467       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0214 03:00:40.856570       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0214 03:00:40.856839       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0214 03:00:40.856851       1 shared_informer.go:318] Caches are synced for configmaps
	I0214 03:00:40.856871       1 aggregator.go:165] initial CRD sync complete...
	I0214 03:00:40.856878       1 autoregister_controller.go:141] Starting autoregister controller
	I0214 03:00:40.856884       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0214 03:00:40.856889       1 cache.go:39] Caches are synced for autoregister controller
	I0214 03:00:40.857582       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0214 03:00:41.688825       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 03:00:42.514751       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0214 03:00:42.521125       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0214 03:00:42.543682       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0214 03:00:42.558718       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 03:00:42.565076       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [18ecfa8ed388] <==
	I0214 03:00:42.705899       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0214 03:00:42.706132       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0214 03:00:42.706432       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0214 03:00:42.709565       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0214 03:00:42.709921       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0214 03:00:42.709977       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0214 03:00:42.764686       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0214 03:00:42.764867       1 gc_controller.go:101] "Starting GC controller"
	I0214 03:00:42.764877       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0214 03:00:42.767562       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0214 03:00:42.767714       1 job_controller.go:224] "Starting job controller"
	I0214 03:00:42.767774       1 shared_informer.go:311] Waiting for caches to sync for job
	I0214 03:00:42.769792       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0214 03:00:42.769856       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0214 03:00:42.769866       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0214 03:00:42.779515       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0214 03:00:42.779683       1 horizontal.go:200] "Starting HPA controller"
	I0214 03:00:42.779749       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0214 03:00:42.782471       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0214 03:00:42.782759       1 stateful_set.go:161] "Starting stateful set controller"
	I0214 03:00:42.782820       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0214 03:00:42.784897       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0214 03:00:42.785091       1 ttl_controller.go:124] "Starting TTL controller"
	I0214 03:00:42.785105       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0214 03:00:42.796769       1 shared_informer.go:318] Caches are synced for tokens
	
	
	==> kube-controller-manager [733dd13a502a] <==
	I0214 03:00:29.984076       1 serving.go:380] Generated self-signed cert in-memory
	I0214 03:00:30.355451       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0214 03:00:30.355494       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 03:00:30.356665       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0214 03:00:30.356831       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0214 03:00:30.356913       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0214 03:00:30.356975       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-scheduler [8e2a90cf5711] <==
	I0214 03:00:29.963397       1 serving.go:380] Generated self-signed cert in-memory
	W0214 03:00:31.966683       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0214 03:00:31.966745       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 03:00:31.966758       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0214 03:00:31.966768       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 03:00:31.977026       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0214 03:00:31.977079       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 03:00:31.978864       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0214 03:00:31.979006       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 03:00:31.979037       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 03:00:31.979183       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0214 03:00:32.079347       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 03:00:33.640478       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 03:00:33.640895       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0214 03:00:33.641207       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0214 03:00:33.643475       1 run.go:74] "command failed" err="finished without leader elect"
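The requestheader_controller warning above carries its own usual remedy in its message. As a hedged sketch only, with the placeholders left exactly as they appear in the log line (ROLEBINDING_NAME and YOUR_NS:YOUR_SA are not values taken from this run), the suggested grant written out as a runnable command is:

	kubectl create rolebinding ROLEBINDING_NAME \
	  --namespace=kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA

As the following lines show, this scheduler instance simply continued without the authentication configuration.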
	
	
	==> kube-scheduler [aae599e7b5b7] <==
	I0214 03:00:38.078289       1 serving.go:380] Generated self-signed cert in-memory
	W0214 03:00:40.760265       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0214 03:00:40.760296       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 03:00:40.760308       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0214 03:00:40.760316       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 03:00:40.775565       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0214 03:00:40.775629       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 03:00:40.777511       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 03:00:40.777544       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 03:00:40.778617       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0214 03:00:40.778758       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0214 03:00:40.880263       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 03:00:36 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:36.961716    4968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0dea07fb485120c1fee60c5f6d734b0-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-470000\" (UID: \"d0dea07fb485120c1fee60c5f6d734b0\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-470000"
	Feb 14 03:00:36 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:36.979073    4968 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-470000"
	Feb 14 03:00:36 kubernetes-upgrade-470000 kubelet[4968]: E0214 03:00:36.979710    4968 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-470000"
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:37.103501    4968 scope.go:117] "RemoveContainer" containerID="733dd13a502ad3f75780bae42c7bc7093fc1b4f8c124add3cf6ff100e15c9b07"
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:37.165334    4968 scope.go:117] "RemoveContainer" containerID="8e2a90cf57113c5acd60c91e37d9b9e43a89b53481cbf0d39c8e47235cad28fd"
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: E0214 03:00:37.261687    4968 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-470000?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="800ms"
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:37.390567    4968 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-470000"
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: E0214 03:00:37.390963    4968 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-470000"
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: W0214 03:00:37.556107    4968 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-470000&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: E0214 03:00:37.556298    4968 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-470000&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: W0214 03:00:37.589715    4968 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: E0214 03:00:37.589779    4968 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: W0214 03:00:37.655750    4968 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: E0214 03:00:37.655819    4968 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:37.873756    4968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61a4a6f4e90e702dea72c166c62d72bea66a59281849ea07abcae2af0301e23d"
	Feb 14 03:00:37 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:37.882521    4968 scope.go:117] "RemoveContainer" containerID="5fdd3712fd6735cb00b7f5d23fd22037840d550e9666e238629d8627174cef66"
	Feb 14 03:00:38 kubernetes-upgrade-470000 kubelet[4968]: E0214 03:00:38.062693    4968 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-470000?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="1.6s"
	Feb 14 03:00:38 kubernetes-upgrade-470000 kubelet[4968]: W0214 03:00:38.158672    4968 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 14 03:00:38 kubernetes-upgrade-470000 kubelet[4968]: E0214 03:00:38.158754    4968 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 14 03:00:38 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:38.201030    4968 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-470000"
	Feb 14 03:00:40 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:40.870057    4968 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-470000"
	Feb 14 03:00:40 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:40.870364    4968 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-470000"
	Feb 14 03:00:40 kubernetes-upgrade-470000 kubelet[4968]: E0214 03:00:40.906497    4968 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-470000\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-470000"
	Feb 14 03:00:41 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:41.658241    4968 apiserver.go:52] "Watching apiserver"
	Feb 14 03:00:41 kubernetes-upgrade-470000 kubelet[4968]: I0214 03:00:41.759552    4968 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
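The component dumps above use the "==> component [container] <==" layout produced by minikube's own log collection. As an illustrative sketch (not a command captured in this run), an equivalent dump for this profile could be regenerated with:

	out/minikube-darwin-amd64 -p kubernetes-upgrade-470000 logs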
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-470000 -n kubernetes-upgrade-470000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-470000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-470000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-470000 describe pod storage-provisioner: exit status 1 (60.533678ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-470000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-470000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-470000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-470000: (2.711784715s)
--- FAIL: TestKubernetesUpgrade (336.14s)
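Taken together, the post-mortem steps helpers_test.go runs for this failure reduce to the following sequence of commands (copied from the invocations above):

	out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-470000 -n kubernetes-upgrade-470000
	kubectl --context kubernetes-upgrade-470000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	kubectl --context kubernetes-upgrade-470000 describe pod storage-provisioner
	out/minikube-darwin-amd64 delete -p kubernetes-upgrade-470000

The describe step is the one that returns NotFound here: storage-provisioner is listed as a non-running pod by the field-selector query, but its pod object is already gone by the time it is described.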

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (257.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m16.928427062s)

                                                
                                                
-- stdout --
	* [old-k8s-version-187000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18165
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-187000 in cluster old-k8s-version-187000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 19:12:00.561822   55174 out.go:291] Setting OutFile to fd 1 ...
	I0213 19:12:00.562223   55174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:12:00.562236   55174 out.go:304] Setting ErrFile to fd 2...
	I0213 19:12:00.562244   55174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:12:00.562538   55174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 19:12:00.564779   55174 out.go:298] Setting JSON to false
	I0213 19:12:00.588457   55174 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":17179,"bootTime":1707863141,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 19:12:00.588569   55174 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 19:12:00.610637   55174 out.go:177] * [old-k8s-version-187000] minikube v1.32.0 on Darwin 14.3.1
	I0213 19:12:00.673359   55174 out.go:177]   - MINIKUBE_LOCATION=18165
	I0213 19:12:00.652499   55174 notify.go:220] Checking for updates...
	I0213 19:12:00.715417   55174 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:12:00.757224   55174 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 19:12:00.778448   55174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 19:12:00.820111   55174 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	I0213 19:12:00.878377   55174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 19:12:00.899704   55174 config.go:182] Loaded profile config "kubenet-210000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 19:12:00.899797   55174 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 19:12:00.958774   55174 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 19:12:00.958932   55174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:12:01.081139   55174 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 03:12:01.064951267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:12:01.103033   55174 out.go:177] * Using the docker driver based on user configuration
	I0213 19:12:01.160777   55174 start.go:298] selected driver: docker
	I0213 19:12:01.160790   55174 start.go:902] validating driver "docker" against <nil>
	I0213 19:12:01.160799   55174 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 19:12:01.164273   55174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:12:01.277413   55174 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 03:12:01.267598741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:12:01.277611   55174 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 19:12:01.277810   55174 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 19:12:01.299124   55174 out.go:177] * Using Docker Desktop driver with root privileges
	I0213 19:12:01.320142   55174 cni.go:84] Creating CNI manager for ""
	I0213 19:12:01.320174   55174 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 19:12:01.320189   55174 start_flags.go:321] config:
	{Name:old-k8s-version-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:12:01.341773   55174 out.go:177] * Starting control plane node old-k8s-version-187000 in cluster old-k8s-version-187000
	I0213 19:12:01.384005   55174 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 19:12:01.404809   55174 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 19:12:01.462980   55174 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 19:12:01.463051   55174 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 19:12:01.463056   55174 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0213 19:12:01.463086   55174 cache.go:56] Caching tarball of preloaded images
	I0213 19:12:01.463247   55174 preload.go:174] Found /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 19:12:01.463261   55174 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 19:12:01.463890   55174 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/config.json ...
	I0213 19:12:01.464067   55174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/config.json: {Name:mkbc6ac6a942e0a66e41515601eaea1747f53b88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:12:01.516381   55174 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 19:12:01.516418   55174 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 19:12:01.516436   55174 cache.go:194] Successfully downloaded all kic artifacts
	I0213 19:12:01.516472   55174 start.go:365] acquiring machines lock for old-k8s-version-187000: {Name:mk0547224fc7a975c28768405bd89305d57998ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 19:12:01.516605   55174 start.go:369] acquired machines lock for "old-k8s-version-187000" in 120.12µs
	I0213 19:12:01.516634   55174 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-187000 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 19:12:01.516710   55174 start.go:125] createHost starting for "" (driver="docker")
	I0213 19:12:01.538019   55174 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0213 19:12:01.538295   55174 start.go:159] libmachine.API.Create for "old-k8s-version-187000" (driver="docker")
	I0213 19:12:01.538324   55174 client.go:168] LocalClient.Create starting
	I0213 19:12:01.538488   55174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem
	I0213 19:12:01.538562   55174 main.go:141] libmachine: Decoding PEM data...
	I0213 19:12:01.538591   55174 main.go:141] libmachine: Parsing certificate...
	I0213 19:12:01.538661   55174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem
	I0213 19:12:01.538717   55174 main.go:141] libmachine: Decoding PEM data...
	I0213 19:12:01.538729   55174 main.go:141] libmachine: Parsing certificate...
	I0213 19:12:01.559321   55174 cli_runner.go:164] Run: docker network inspect old-k8s-version-187000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0213 19:12:01.610290   55174 cli_runner.go:211] docker network inspect old-k8s-version-187000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0213 19:12:01.610391   55174 network_create.go:281] running [docker network inspect old-k8s-version-187000] to gather additional debugging logs...
	I0213 19:12:01.610414   55174 cli_runner.go:164] Run: docker network inspect old-k8s-version-187000
	W0213 19:12:01.661115   55174 cli_runner.go:211] docker network inspect old-k8s-version-187000 returned with exit code 1
	I0213 19:12:01.661149   55174 network_create.go:284] error running [docker network inspect old-k8s-version-187000]: docker network inspect old-k8s-version-187000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-187000 not found
	I0213 19:12:01.661165   55174 network_create.go:286] output of [docker network inspect old-k8s-version-187000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-187000 not found
	
	** /stderr **
	I0213 19:12:01.661304   55174 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 19:12:01.718523   55174 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 19:12:01.720153   55174 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 19:12:01.721501   55174 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 19:12:01.722845   55174 network.go:210] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 19:12:01.723211   55174 network.go:207] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002414340}
	I0213 19:12:01.723227   55174 network_create.go:124] attempt to create docker network old-k8s-version-187000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0213 19:12:01.723294   55174 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-187000 old-k8s-version-187000
	I0213 19:12:01.811031   55174 network_create.go:108] docker network old-k8s-version-187000 192.168.85.0/24 created
	I0213 19:12:01.811073   55174 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-187000" container
	I0213 19:12:01.811187   55174 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0213 19:12:01.863382   55174 cli_runner.go:164] Run: docker volume create old-k8s-version-187000 --label name.minikube.sigs.k8s.io=old-k8s-version-187000 --label created_by.minikube.sigs.k8s.io=true
	I0213 19:12:01.917347   55174 oci.go:103] Successfully created a docker volume old-k8s-version-187000
	I0213 19:12:01.917488   55174 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-187000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-187000 --entrypoint /usr/bin/test -v old-k8s-version-187000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0213 19:12:02.309967   55174 oci.go:107] Successfully prepared a docker volume old-k8s-version-187000
	I0213 19:12:02.310012   55174 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 19:12:02.310027   55174 kic.go:194] Starting extracting preloaded images to volume ...
	I0213 19:12:02.310149   55174 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-187000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0213 19:12:04.415675   55174 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-187000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.105470822s)
	I0213 19:12:04.415703   55174 kic.go:203] duration metric: took 2.105690 seconds to extract preloaded images to volume
	I0213 19:12:04.415831   55174 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0213 19:12:04.524333   55174 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-187000 --name old-k8s-version-187000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-187000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-187000 --network old-k8s-version-187000 --ip 192.168.85.2 --volume old-k8s-version-187000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0213 19:12:04.795219   55174 cli_runner.go:164] Run: docker container inspect old-k8s-version-187000 --format={{.State.Running}}
	I0213 19:12:04.851643   55174 cli_runner.go:164] Run: docker container inspect old-k8s-version-187000 --format={{.State.Status}}
	I0213 19:12:04.909545   55174 cli_runner.go:164] Run: docker exec old-k8s-version-187000 stat /var/lib/dpkg/alternatives/iptables
	I0213 19:12:05.015130   55174 oci.go:144] the created container "old-k8s-version-187000" has a running status.
	I0213 19:12:05.015197   55174 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa...
	I0213 19:12:05.151974   55174 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0213 19:12:05.247369   55174 cli_runner.go:164] Run: docker container inspect old-k8s-version-187000 --format={{.State.Status}}
	I0213 19:12:05.305725   55174 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0213 19:12:05.305782   55174 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-187000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0213 19:12:05.405766   55174 cli_runner.go:164] Run: docker container inspect old-k8s-version-187000 --format={{.State.Status}}
	I0213 19:12:05.462884   55174 machine.go:88] provisioning docker machine ...
	I0213 19:12:05.462934   55174 ubuntu.go:169] provisioning hostname "old-k8s-version-187000"
	I0213 19:12:05.463036   55174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:12:05.517571   55174 main.go:141] libmachine: Using SSH client type: native
	I0213 19:12:05.517897   55174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56980 <nil> <nil>}
	I0213 19:12:05.517910   55174 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-187000 && echo "old-k8s-version-187000" | sudo tee /etc/hostname
	I0213 19:12:05.682672   55174 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-187000
	
	I0213 19:12:05.682809   55174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:12:05.735388   55174 main.go:141] libmachine: Using SSH client type: native
	I0213 19:12:05.735683   55174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56980 <nil> <nil>}
	I0213 19:12:05.735696   55174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-187000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-187000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-187000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 19:12:05.873452   55174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 19:12:05.873471   55174 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18165-38421/.minikube CaCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18165-38421/.minikube}
	I0213 19:12:05.873488   55174 ubuntu.go:177] setting up certificates
	I0213 19:12:05.873495   55174 provision.go:83] configureAuth start
	I0213 19:12:05.873565   55174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187000
	I0213 19:12:05.925098   55174 provision.go:138] copyHostCerts
	I0213 19:12:05.925184   55174 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem, removing ...
	I0213 19:12:05.925193   55174 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
	I0213 19:12:05.925316   55174 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem (1078 bytes)
	I0213 19:12:05.925530   55174 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem, removing ...
	I0213 19:12:05.925537   55174 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
	I0213 19:12:05.925615   55174 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem (1123 bytes)
	I0213 19:12:05.925803   55174 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem, removing ...
	I0213 19:12:05.925812   55174 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
	I0213 19:12:05.925885   55174 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem (1679 bytes)
	I0213 19:12:05.926033   55174 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-187000 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-187000]
	I0213 19:12:06.027536   55174 provision.go:172] copyRemoteCerts
	I0213 19:12:06.027740   55174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 19:12:06.027796   55174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:12:06.080361   55174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56980 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa Username:docker}
	I0213 19:12:06.187382   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 19:12:06.236736   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0213 19:12:06.277231   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 19:12:06.318807   55174 provision.go:86] duration metric: configureAuth took 445.282721ms
	I0213 19:12:06.318821   55174 ubuntu.go:193] setting minikube options for container-runtime
	I0213 19:12:06.319006   55174 config.go:182] Loaded profile config "old-k8s-version-187000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0213 19:12:06.319081   55174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:12:06.372540   55174 main.go:141] libmachine: Using SSH client type: native
	I0213 19:12:06.372873   55174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56980 <nil> <nil>}
	I0213 19:12:06.372888   55174 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 19:12:06.509951   55174 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 19:12:06.509968   55174 ubuntu.go:71] root file system type: overlay
	I0213 19:12:06.510061   55174 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 19:12:06.510153   55174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:12:06.561223   55174 main.go:141] libmachine: Using SSH client type: native
	I0213 19:12:06.561537   55174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56980 <nil> <nil>}
	I0213 19:12:06.561586   55174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 19:12:06.722471   55174 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
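The comment block inside the unit above describes the standard systemd override pattern: for any service that is not Type=oneshot, a second ExecStart= line is rejected, so an inherited command has to be cleared with an empty ExecStart= before the replacement is set. A minimal sketch of the same pattern written as a drop-in (the drop-in path and the dockerd flags here are illustrative, not the exact files minikube writes):

  sudo mkdir -p /etc/systemd/system/docker.service.d
  sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
  [Service]
  # Clear the ExecStart inherited from the base unit first; otherwise systemd
  # refuses the unit with "more than one ExecStart= setting".
  ExecStart=
  ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
  EOF
  sudo systemctl daemon-reload && sudo systemctl restart docker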
	
	I0213 19:12:06.722568   55174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:12:06.775752   55174 main.go:141] libmachine: Using SSH client type: native
	I0213 19:12:06.776060   55174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56980 <nil> <nil>}
	I0213 19:12:06.776074   55174 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 19:12:07.419890   55174 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-14 03:12:06.717243524 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0213 19:12:07.419920   55174 machine.go:91] provisioned docker machine in 1.957024412s
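The SSH command whose output appears above relies on diff's exit status: diff -u exits 0 when the two unit files are identical and non-zero when they differ (or when the old file is missing), so the move/daemon-reload/enable/restart block only runs when the rendered unit actually changed. The same idiom in isolation, with the paths from this log as placeholders:

  OLD=/lib/systemd/system/docker.service
  NEW=/lib/systemd/system/docker.service.new
  # Replace and restart only when the candidate unit differs from the current one.
  sudo diff -u "$OLD" "$NEW" || {
    sudo mv "$NEW" "$OLD"
    sudo systemctl -f daemon-reload
    sudo systemctl -f enable docker
    sudo systemctl -f restart docker
  }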
	I0213 19:12:07.419928   55174 client.go:171] LocalClient.Create took 5.881638013s
	I0213 19:12:07.419944   55174 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-187000" took 5.88168893s
	I0213 19:12:07.419951   55174 start.go:300] post-start starting for "old-k8s-version-187000" (driver="docker")
	I0213 19:12:07.419958   55174 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 19:12:07.420036   55174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 19:12:07.420093   55174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:12:07.473644   55174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56980 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa Username:docker}
	I0213 19:12:07.580277   55174 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 19:12:07.584381   55174 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 19:12:07.584406   55174 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 19:12:07.584414   55174 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 19:12:07.584419   55174 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 19:12:07.584429   55174 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/addons for local assets ...
	I0213 19:12:07.584524   55174 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/files for local assets ...
	I0213 19:12:07.584735   55174 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> 388992.pem in /etc/ssl/certs
	I0213 19:12:07.584937   55174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 19:12:07.599533   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:12:07.640919   55174 start.go:303] post-start completed in 220.960743ms
	I0213 19:12:07.641460   55174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187000
	I0213 19:12:07.693767   55174 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/config.json ...
	I0213 19:12:07.694250   55174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 19:12:07.694316   55174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:12:07.746263   55174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56980 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa Username:docker}
	I0213 19:12:07.838254   55174 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 19:12:07.843321   55174 start.go:128] duration metric: createHost completed in 6.32663669s
	I0213 19:12:07.843344   55174 start.go:83] releasing machines lock for "old-k8s-version-187000", held for 6.326774128s
	I0213 19:12:07.843438   55174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187000
	I0213 19:12:07.894941   55174 ssh_runner.go:195] Run: cat /version.json
	I0213 19:12:07.894965   55174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 19:12:07.895019   55174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:12:07.895049   55174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:12:07.953673   55174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56980 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa Username:docker}
	I0213 19:12:07.953660   55174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56980 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa Username:docker}
	I0213 19:12:08.151285   55174 ssh_runner.go:195] Run: systemctl --version
	I0213 19:12:08.156153   55174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 19:12:08.161333   55174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 19:12:08.203293   55174 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 19:12:08.203452   55174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 19:12:08.231676   55174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 19:12:08.259309   55174 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
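The three find/sed passes above normalize whatever CNI configs ship in the base image: the loopback config gains a "name" field and has cniVersion pinned to 1.0.0, while bridge and podman configs get their subnets rewritten to the 10.244.0.0/16 pod CIDR. A loopback file patched this way would be expected to look roughly like the following (the file name and field order are assumptions, not taken from this log):

  cat /etc/cni/net.d/200-loopback.conf
  # {
  #     "cniVersion": "1.0.0",
  #     "name": "loopback",
  #     "type": "loopback"
  # }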
	I0213 19:12:08.259325   55174 start.go:475] detecting cgroup driver to use...
	I0213 19:12:08.259337   55174 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:12:08.259426   55174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:12:08.287855   55174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0213 19:12:08.303842   55174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 19:12:08.320136   55174 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 19:12:08.320202   55174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 19:12:08.337972   55174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:12:08.354122   55174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 19:12:08.371002   55174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:12:08.388519   55174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 19:12:08.404717   55174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 19:12:08.421599   55174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 19:12:08.438025   55174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 19:12:08.453367   55174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:12:08.523203   55174 ssh_runner.go:195] Run: sudo systemctl restart containerd
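The sed edits above pin the containerd settings that matter for this run: the sandbox (pause) image, the cgroup driver, the runc runtime type and the CNI conf directory. A quick spot-check after the restart; the exact config.toml layout varies with the containerd version, so the grep and the expected lines are only illustrative:

  grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
  # expected, given the edits above:
  #   sandbox_image = "registry.k8s.io/pause:3.1"
  #   SystemdCgroup = false
  #   conf_dir = "/etc/cni/net.d"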
	I0213 19:12:08.619050   55174 start.go:475] detecting cgroup driver to use...
	I0213 19:12:08.619095   55174 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:12:08.619184   55174 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 19:12:08.649190   55174 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 19:12:08.649276   55174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 19:12:08.678020   55174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:12:08.715623   55174 ssh_runner.go:195] Run: which cri-dockerd
	I0213 19:12:08.721995   55174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 19:12:08.745816   55174 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 19:12:08.788206   55174 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 19:12:08.905489   55174 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 19:12:09.017318   55174 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 19:12:09.017408   55174 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 19:12:09.053113   55174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:12:09.124855   55174 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 19:12:09.453034   55174 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:12:09.481568   55174 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:12:09.559690   55174 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0213 19:12:09.559798   55174 cli_runner.go:164] Run: docker exec -t old-k8s-version-187000 dig +short host.docker.internal
	I0213 19:12:09.693621   55174 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 19:12:09.693851   55174 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 19:12:09.699649   55174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
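The one-liner above is an idempotent /etc/hosts update: any existing host.minikube.internal entry is filtered out, the current mapping is appended, and the result is written to a temp file and copied back with sudo cp, since a plain redirection into /etc/hosts would not run as root. The same idiom generalized with placeholder variables:

  NAME=host.minikube.internal
  IP=192.168.65.254
  # Drop any stale entry for $NAME, append the current mapping, then install the result.
  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
  sudo cp /tmp/hosts.$$ /etc/hosts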
	I0213 19:12:09.718873   55174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:12:09.778212   55174 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 19:12:09.778296   55174 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:12:09.799554   55174 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 19:12:09.799570   55174 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0213 19:12:09.799637   55174 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 19:12:09.815885   55174 ssh_runner.go:195] Run: which lz4
	I0213 19:12:09.821035   55174 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 19:12:09.825821   55174 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 19:12:09.825859   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0213 19:12:16.110628   55174 docker.go:649] Took 6.289711 seconds to copy over tarball
	I0213 19:12:16.110737   55174 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 19:12:17.752414   55174 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.641671892s)
	I0213 19:12:17.752430   55174 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 19:12:17.805984   55174 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 19:12:17.822950   55174 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0213 19:12:17.856224   55174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:12:17.926279   55174 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 19:12:18.426382   55174 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:12:18.448246   55174 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 19:12:18.448258   55174 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0213 19:12:18.448267   55174 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 19:12:18.453262   55174 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 19:12:18.453356   55174 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0213 19:12:18.453477   55174 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0213 19:12:18.453513   55174 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 19:12:18.453504   55174 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 19:12:18.454091   55174 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0213 19:12:18.454515   55174 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 19:12:18.454603   55174 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 19:12:18.459982   55174 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0213 19:12:18.460101   55174 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 19:12:18.460138   55174 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 19:12:18.460247   55174 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0213 19:12:18.461349   55174 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 19:12:18.461435   55174 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0213 19:12:18.461560   55174 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 19:12:18.461453   55174 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 19:12:20.339272   55174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0213 19:12:20.358916   55174 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0213 19:12:20.358957   55174 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 19:12:20.359038   55174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0213 19:12:20.377553   55174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0213 19:12:20.431127   55174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0213 19:12:20.451413   55174 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0213 19:12:20.451437   55174 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 19:12:20.451491   55174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0213 19:12:20.471349   55174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0213 19:12:20.478964   55174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0213 19:12:20.479679   55174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 19:12:20.490202   55174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0213 19:12:20.499922   55174 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0213 19:12:20.499959   55174 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 19:12:20.500023   55174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0213 19:12:20.501059   55174 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0213 19:12:20.501080   55174 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 19:12:20.501142   55174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 19:12:20.512182   55174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0213 19:12:20.512822   55174 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0213 19:12:20.512849   55174 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0213 19:12:20.512939   55174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0213 19:12:20.523430   55174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0213 19:12:20.525195   55174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0213 19:12:20.525764   55174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0213 19:12:20.538437   55174 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0213 19:12:20.538468   55174 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0213 19:12:20.538547   55174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0213 19:12:20.541335   55174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0213 19:12:20.594256   55174 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0213 19:12:20.594282   55174 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0213 19:12:20.594343   55174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0213 19:12:20.605733   55174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0213 19:12:20.615016   55174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0213 19:12:21.242662   55174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 19:12:21.263161   55174 cache_images.go:92] LoadImages completed in 2.814897668s
	W0213 19:12:21.263217   55174 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
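LoadImages walks the image list above and, for each tag whose image ID does not match the pinned hash, removes the stale tag and tries to load a replacement from the local cache directory; here the cached image files are missing on the host, so the load fails with "no such file or directory" and the run continues with only a warning. The per-image check reduced to shell (an illustrative sketch only; minikube performs this in Go via ssh_runner, and its exact ID comparison may differ):

  IMG=registry.k8s.io/kube-apiserver:v1.16.0
  WANT=b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e
  GOT=$(docker image inspect --format '{{.Id}}' "$IMG" 2>/dev/null)
  if [ "$GOT" != "sha256:$WANT" ]; then
    docker rmi "$IMG"
    # load the cached copy of $IMG here, if one exists on disk
  fi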
	I0213 19:12:21.263296   55174 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 19:12:21.314323   55174 cni.go:84] Creating CNI manager for ""
	I0213 19:12:21.314340   55174 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 19:12:21.314355   55174 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 19:12:21.314371   55174 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-187000 NodeName:old-k8s-version-187000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 19:12:21.314459   55174 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-187000"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-187000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
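The rendered kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml on the node and handed to kubeadm init (visible further down in this log). When debugging a config like this by hand, a dry run surfaces validation problems without touching the node; this is a sketch using the binary path and config path from this log, assuming the --dry-run flag of kubeadm init for this version:

  sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run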
	
	I0213 19:12:21.314506   55174 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-187000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 19:12:21.314566   55174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0213 19:12:21.329231   55174 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 19:12:21.329299   55174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 19:12:21.344368   55174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0213 19:12:21.373022   55174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 19:12:21.402054   55174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0213 19:12:21.430642   55174 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0213 19:12:21.435332   55174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 19:12:21.452532   55174 certs.go:56] Setting up /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000 for IP: 192.168.85.2
	I0213 19:12:21.452553   55174 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5f1a81e3b2f96d4314e8cdee92a3e3396cb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:12:21.452726   55174 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key
	I0213 19:12:21.452796   55174 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key
	I0213 19:12:21.452840   55174 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/client.key
	I0213 19:12:21.452856   55174 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/client.crt with IP's: []
	I0213 19:12:21.580497   55174 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/client.crt ...
	I0213 19:12:21.580512   55174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/client.crt: {Name:mk2b122c169061bb87f56a32f0b0ff9c71c53588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:12:21.580860   55174 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/client.key ...
	I0213 19:12:21.580869   55174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/client.key: {Name:mka968d5d957b95e8bd77e593629ef65cd01a944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:12:21.581078   55174 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.key.43b9df8c
	I0213 19:12:21.581093   55174 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 19:12:21.714726   55174 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.crt.43b9df8c ...
	I0213 19:12:21.714741   55174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.crt.43b9df8c: {Name:mkc23ac9979a6ed24c0b56a4b50e79083da52573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:12:21.715097   55174 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.key.43b9df8c ...
	I0213 19:12:21.715110   55174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.key.43b9df8c: {Name:mke958fbf3294f0f81df20c78998534e19113910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:12:21.715355   55174 certs.go:337] copying /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.crt.43b9df8c -> /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.crt
	I0213 19:12:21.715557   55174 certs.go:341] copying /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.key.43b9df8c -> /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.key
	I0213 19:12:21.715811   55174 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/proxy-client.key
	I0213 19:12:21.715826   55174 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/proxy-client.crt with IP's: []
	I0213 19:12:21.799235   55174 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/proxy-client.crt ...
	I0213 19:12:21.799247   55174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/proxy-client.crt: {Name:mk398667ce44e6e73fa533363b9b5d5315045450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:12:21.799519   55174 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/proxy-client.key ...
	I0213 19:12:21.799528   55174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/proxy-client.key: {Name:mk25a2afe532ca387a7a1ea9ab2a1da594c59ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:12:21.799916   55174 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem (1338 bytes)
	W0213 19:12:21.800004   55174 certs.go:433] ignoring /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899_empty.pem, impossibly tiny 0 bytes
	I0213 19:12:21.800016   55174 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 19:12:21.800116   55174 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem (1078 bytes)
	I0213 19:12:21.800150   55174 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem (1123 bytes)
	I0213 19:12:21.800181   55174 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem (1679 bytes)
	I0213 19:12:21.800248   55174 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:12:21.800745   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 19:12:21.841679   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 19:12:21.881219   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 19:12:21.921499   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 19:12:21.961411   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 19:12:22.002016   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 19:12:22.042226   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 19:12:22.082889   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 19:12:22.123110   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 19:12:22.163461   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem --> /usr/share/ca-certificates/38899.pem (1338 bytes)
	I0213 19:12:22.203551   55174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /usr/share/ca-certificates/388992.pem (1708 bytes)
	I0213 19:12:22.244256   55174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 19:12:22.272917   55174 ssh_runner.go:195] Run: openssl version
	I0213 19:12:22.278557   55174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 19:12:22.294380   55174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:12:22.298538   55174 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:12:22.298593   55174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:12:22.305246   55174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 19:12:22.321687   55174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38899.pem && ln -fs /usr/share/ca-certificates/38899.pem /etc/ssl/certs/38899.pem"
	I0213 19:12:22.337104   55174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38899.pem
	I0213 19:12:22.341609   55174 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 02:17 /usr/share/ca-certificates/38899.pem
	I0213 19:12:22.341674   55174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38899.pem
	I0213 19:12:22.348316   55174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38899.pem /etc/ssl/certs/51391683.0"
	I0213 19:12:22.364240   55174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388992.pem && ln -fs /usr/share/ca-certificates/388992.pem /etc/ssl/certs/388992.pem"
	I0213 19:12:22.379686   55174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388992.pem
	I0213 19:12:22.384050   55174 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 02:17 /usr/share/ca-certificates/388992.pem
	I0213 19:12:22.384100   55174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388992.pem
	I0213 19:12:22.390846   55174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/388992.pem /etc/ssl/certs/3ec20f2e.0"
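The pattern in the commands above is how an OpenSSL-style trust directory is populated: each CA certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-hash name (<hash>.0), which is exactly what the openssl x509 -hash calls compute. The same step in isolation, using a cert path taken from this log:

  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
  # for minikubeCA.pem this yields /etc/ssl/certs/b5213941.0, matching the log above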
	I0213 19:12:22.407180   55174 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 19:12:22.411412   55174 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 19:12:22.411459   55174 kubeadm.go:404] StartCluster: {Name:old-k8s-version-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:12:22.411550   55174 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:12:22.430897   55174 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 19:12:22.445701   55174 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 19:12:22.460564   55174 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 19:12:22.460630   55174 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:12:22.475504   55174 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 19:12:22.475537   55174 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 19:12:22.532015   55174 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 19:12:22.532088   55174 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 19:12:22.823495   55174 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 19:12:22.823582   55174 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 19:12:22.823655   55174 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 19:12:23.006287   55174 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 19:12:23.007071   55174 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 19:12:23.013227   55174 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 19:12:23.082286   55174 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 19:12:23.105244   55174 out.go:204]   - Generating certificates and keys ...
	I0213 19:12:23.105317   55174 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 19:12:23.105378   55174 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 19:12:23.199746   55174 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 19:12:23.263546   55174 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 19:12:23.362671   55174 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 19:12:23.561641   55174 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 19:12:23.681656   55174 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 19:12:23.682055   55174 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-187000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0213 19:12:23.985729   55174 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 19:12:23.986059   55174 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-187000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0213 19:12:24.082387   55174 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 19:12:24.226428   55174 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 19:12:24.489543   55174 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 19:12:24.489650   55174 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 19:12:24.663169   55174 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 19:12:24.702972   55174 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 19:12:24.806519   55174 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 19:12:25.190165   55174 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 19:12:25.190763   55174 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 19:12:25.213252   55174 out.go:204]   - Booting up control plane ...
	I0213 19:12:25.213364   55174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 19:12:25.213459   55174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 19:12:25.213547   55174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 19:12:25.213648   55174 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 19:12:25.213863   55174 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 19:13:05.200808   55174 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 19:13:05.201131   55174 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:13:05.201270   55174 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:13:10.202893   55174 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:13:10.203050   55174 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:13:20.203994   55174 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:13:20.204155   55174 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:13:40.205737   55174 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:13:40.205892   55174 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:14:20.207289   55174 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:14:20.207473   55174 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:14:20.207484   55174 kubeadm.go:322] 
	I0213 19:14:20.207515   55174 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 19:14:20.207558   55174 kubeadm.go:322] 	timed out waiting for the condition
	I0213 19:14:20.207571   55174 kubeadm.go:322] 
	I0213 19:14:20.207607   55174 kubeadm.go:322] This error is likely caused by:
	I0213 19:14:20.207645   55174 kubeadm.go:322] 	- The kubelet is not running
	I0213 19:14:20.207734   55174 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 19:14:20.207744   55174 kubeadm.go:322] 
	I0213 19:14:20.207814   55174 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 19:14:20.207843   55174 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 19:14:20.207874   55174 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 19:14:20.207883   55174 kubeadm.go:322] 
	I0213 19:14:20.207967   55174 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 19:14:20.208042   55174 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 19:14:20.208103   55174 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 19:14:20.208135   55174 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 19:14:20.208185   55174 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 19:14:20.208207   55174 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 19:14:20.212464   55174 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 19:14:20.212558   55174 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 19:14:20.212673   55174 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 19:14:20.212766   55174 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 19:14:20.212827   55174 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 19:14:20.212879   55174 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0213 19:14:20.212962   55174 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-187000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-187000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-187000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-187000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0213 19:14:20.213006   55174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 19:14:20.635774   55174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 19:14:20.652993   55174 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 19:14:20.653071   55174 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:14:20.668003   55174 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 19:14:20.668032   55174 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 19:14:20.723846   55174 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 19:14:20.723887   55174 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 19:14:21.006613   55174 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 19:14:21.006698   55174 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 19:14:21.006778   55174 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 19:14:21.182490   55174 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 19:14:21.184535   55174 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 19:14:21.191438   55174 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 19:14:21.257769   55174 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 19:14:21.278823   55174 out.go:204]   - Generating certificates and keys ...
	I0213 19:14:21.278931   55174 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 19:14:21.279001   55174 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 19:14:21.279102   55174 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 19:14:21.279197   55174 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 19:14:21.279260   55174 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 19:14:21.279309   55174 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 19:14:21.279359   55174 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 19:14:21.279409   55174 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 19:14:21.279461   55174 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 19:14:21.279524   55174 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 19:14:21.279554   55174 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 19:14:21.279600   55174 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 19:14:21.339870   55174 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 19:14:21.469940   55174 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 19:14:21.598676   55174 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 19:14:21.720525   55174 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 19:14:21.721223   55174 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 19:14:21.744068   55174 out.go:204]   - Booting up control plane ...
	I0213 19:14:21.744159   55174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 19:14:21.744221   55174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 19:14:21.744275   55174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 19:14:21.744342   55174 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 19:14:21.744477   55174 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 19:15:01.730392   55174 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 19:15:01.731267   55174 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:15:01.731484   55174 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:15:06.732285   55174 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:15:06.732490   55174 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:15:16.734086   55174 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:15:16.734244   55174 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:15:36.735323   55174 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:15:36.735488   55174 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:16:16.737072   55174 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:16:16.737275   55174 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:16:16.737291   55174 kubeadm.go:322] 
	I0213 19:16:16.737351   55174 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 19:16:16.737436   55174 kubeadm.go:322] 	timed out waiting for the condition
	I0213 19:16:16.737453   55174 kubeadm.go:322] 
	I0213 19:16:16.737497   55174 kubeadm.go:322] This error is likely caused by:
	I0213 19:16:16.737530   55174 kubeadm.go:322] 	- The kubelet is not running
	I0213 19:16:16.737603   55174 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 19:16:16.737607   55174 kubeadm.go:322] 
	I0213 19:16:16.737680   55174 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 19:16:16.737708   55174 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 19:16:16.737731   55174 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 19:16:16.737760   55174 kubeadm.go:322] 
	I0213 19:16:16.737863   55174 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 19:16:16.737971   55174 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 19:16:16.738056   55174 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 19:16:16.738096   55174 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 19:16:16.738168   55174 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 19:16:16.738197   55174 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 19:16:16.742001   55174 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 19:16:16.742086   55174 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 19:16:16.742186   55174 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 19:16:16.742288   55174 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 19:16:16.742400   55174 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 19:16:16.742464   55174 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0213 19:16:16.742496   55174 kubeadm.go:406] StartCluster complete in 3m54.332612671s
	I0213 19:16:16.742581   55174 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:16:16.760307   55174 logs.go:276] 0 containers: []
	W0213 19:16:16.760321   55174 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:16:16.760394   55174 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:16:16.778333   55174 logs.go:276] 0 containers: []
	W0213 19:16:16.778346   55174 logs.go:278] No container was found matching "etcd"
	I0213 19:16:16.778416   55174 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:16:16.795745   55174 logs.go:276] 0 containers: []
	W0213 19:16:16.795759   55174 logs.go:278] No container was found matching "coredns"
	I0213 19:16:16.795823   55174 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:16:16.815345   55174 logs.go:276] 0 containers: []
	W0213 19:16:16.815359   55174 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:16:16.815426   55174 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:16:16.834729   55174 logs.go:276] 0 containers: []
	W0213 19:16:16.834758   55174 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:16:16.834844   55174 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:16:16.852782   55174 logs.go:276] 0 containers: []
	W0213 19:16:16.852795   55174 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:16:16.852864   55174 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:16:16.870039   55174 logs.go:276] 0 containers: []
	W0213 19:16:16.870053   55174 logs.go:278] No container was found matching "kindnet"
	I0213 19:16:16.870061   55174 logs.go:123] Gathering logs for container status ...
	I0213 19:16:16.870068   55174 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:16:16.933906   55174 logs.go:123] Gathering logs for kubelet ...
	I0213 19:16:16.933924   55174 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:16:16.982055   55174 logs.go:123] Gathering logs for dmesg ...
	I0213 19:16:16.982074   55174 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:16:17.004424   55174 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:16:17.004443   55174 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:16:17.074652   55174 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:16:17.074663   55174 logs.go:123] Gathering logs for Docker ...
	I0213 19:16:17.074672   55174 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0213 19:16:17.097327   55174 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0213 19:16:17.097349   55174 out.go:239] * 
	* 
	W0213 19:16:17.097384   55174 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 19:16:17.097397   55174 out.go:239] * 
	* 
	W0213 19:16:17.098106   55174 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 19:16:17.161677   55174 out.go:177] 
	W0213 19:16:17.219680   55174 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 19:16:17.219714   55174 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0213 19:16:17.219732   55174 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0213 19:16:17.314604   55174 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
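The wait-control-plane timeout above comes right after a cgroup-driver warning (Docker reports "cgroupfs" where kubeadm recommends "systemd"), and the suggestion minikube prints is to retry with an explicit kubelet cgroup driver. A minimal sketch of that retry, reusing the core flags from the failing command plus the flag named in the suggestion (an illustration of the printed advice, not a verified fix):

	# Retry the first start with the kubelet cgroup driver the suggestion above names.
	out/minikube-darwin-amd64 start -p old-k8s-version-187000 --memory=2200 --driver=docker \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd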
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-187000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-187000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f",
	        "Created": "2024-02-14T03:12:04.577549374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 357312,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:12:04.787641413Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hosts",
	        "LogPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f-json.log",
	        "Name": "/old-k8s-version-187000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-187000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-187000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-187000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-187000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-187000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27c338fc6291547d5f15f2c67ccf11fdad0b20a53d40597e882faa6e06914649",
	            "SandboxKey": "/var/run/docker/netns/27c338fc6291",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56980"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56981"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56983"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56979"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-187000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e0b9362b2efd",
	                        "old-k8s-version-187000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "4cb1b8693c9780c94ad8de0e0072aef11b304b625a6e68f12739c271830cb055",
	                    "EndpointID": "fd354e3d40a2d2b72f7312573a95b4b0fb2ce3310b393493b5189e0c0efbbe51",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-187000",
	                        "e0b9362b2efd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
E0213 19:16:17.676483   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/calico-210000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 6 (406.569557ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 19:16:17.878114   56040 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-187000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (257.43s)
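The status probe above reports the host as Running but warns that kubectl still points at a stale context, and the warning's own remedy is `minikube update-context`. A minimal sketch of applying it to this profile (assuming the global `-p` profile flag, as used elsewhere in this log, applies here too):

	# Repoint the kubeconfig entry for this profile, as the status warning suggests.
	out/minikube-darwin-amd64 update-context -p old-k8s-version-187000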

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-187000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-187000 create -f testdata/busybox.yaml: exit status 1 (40.146042ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-187000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-187000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-187000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-187000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f",
	        "Created": "2024-02-14T03:12:04.577549374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 357312,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:12:04.787641413Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hosts",
	        "LogPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f-json.log",
	        "Name": "/old-k8s-version-187000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-187000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-187000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-187000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-187000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-187000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27c338fc6291547d5f15f2c67ccf11fdad0b20a53d40597e882faa6e06914649",
	            "SandboxKey": "/var/run/docker/netns/27c338fc6291",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56980"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56981"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56983"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56979"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-187000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e0b9362b2efd",
	                        "old-k8s-version-187000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "4cb1b8693c9780c94ad8de0e0072aef11b304b625a6e68f12739c271830cb055",
	                    "EndpointID": "fd354e3d40a2d2b72f7312573a95b4b0fb2ce3310b393493b5189e0c0efbbe51",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-187000",
	                        "e0b9362b2efd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 6 (404.752567ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 19:16:18.377969   56053 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-187000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-187000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-187000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f",
	        "Created": "2024-02-14T03:12:04.577549374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 357312,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:12:04.787641413Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hosts",
	        "LogPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f-json.log",
	        "Name": "/old-k8s-version-187000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-187000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-187000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-187000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-187000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-187000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27c338fc6291547d5f15f2c67ccf11fdad0b20a53d40597e882faa6e06914649",
	            "SandboxKey": "/var/run/docker/netns/27c338fc6291",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56980"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56981"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56983"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56979"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-187000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e0b9362b2efd",
	                        "old-k8s-version-187000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "4cb1b8693c9780c94ad8de0e0072aef11b304b625a6e68f12739c271830cb055",
	                    "EndpointID": "fd354e3d40a2d2b72f7312573a95b4b0fb2ce3310b393493b5189e0c0efbbe51",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-187000",
	                        "e0b9362b2efd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 6 (416.052499ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 19:16:18.852466   56065 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-187000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.97s)
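The deploy step fails because kubectl has no context named "old-k8s-version-187000", which follows from the earlier first-start failure: the kubeconfig entry was never written. A quick way to confirm that from the same environment the tests use (a sketch, assuming the default KUBECONFIG) would be:

	# List the contexts kubectl actually knows about; the profile's context should be absent.
	kubectl config get-contexts
	kubectl config current-context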

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-187000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0213 19:16:21.656815   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:21.662129   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:21.672618   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:21.693496   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:21.733599   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:21.813910   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:21.976107   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:22.296580   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:22.937178   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:24.217984   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:26.778121   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:28.721718   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:16:31.898240   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:42.154191   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:16:45.380737   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/calico-210000/client.crt: no such file or directory
E0213 19:17:01.937742   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:17:02.657378   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:17:03.307128   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:03.312655   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:03.323786   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:03.344206   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:03.384908   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:03.465835   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:03.626124   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:03.946595   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:04.587296   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:05.868083   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:06.195448   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:17:08.431065   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:09.722274   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:17:13.553564   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:23.796193   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:43.624805   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:17:44.277815   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:17:46.705923   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:17:49.274509   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-187000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m45.207269989s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-187000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
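Every apply in the stderr above is refused on 127.0.0.1:8443, i.e. the apiserver inside the node never came up, which is consistent with the earlier wait-control-plane timeout. One way to confirm, along the lines of the 'docker ps -a | grep kube | grep -v pause' hint kubeadm printed earlier (this sketch assumes the node container is still running its inner Docker daemon):

	# List Kubernetes containers inside the minikube node to see whether kube-apiserver ever started.
	docker exec old-k8s-version-187000 docker ps -a | grep kube | grep -v pause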
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-187000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-187000 describe deploy/metrics-server -n kube-system: exit status 1 (38.768048ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-187000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-187000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
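The image check expects the deployment to reference "fake.domain/registry.k8s.io/echoserver:1.4", but the describe above returned nothing because the context is missing. Once a working context exists, a narrower check than a full describe could be the following sketch (it assumes the metrics-server deployment was actually created):

	# Print just the container images of the metrics-server deployment.
	kubectl --context old-k8s-version-187000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'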
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-187000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-187000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f",
	        "Created": "2024-02-14T03:12:04.577549374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 357312,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:12:04.787641413Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hosts",
	        "LogPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f-json.log",
	        "Name": "/old-k8s-version-187000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-187000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-187000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-187000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-187000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-187000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27c338fc6291547d5f15f2c67ccf11fdad0b20a53d40597e882faa6e06914649",
	            "SandboxKey": "/var/run/docker/netns/27c338fc6291",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56980"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56981"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56983"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56979"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-187000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e0b9362b2efd",
	                        "old-k8s-version-187000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "4cb1b8693c9780c94ad8de0e0072aef11b304b625a6e68f12739c271830cb055",
	                    "EndpointID": "fd354e3d40a2d2b72f7312573a95b4b0fb2ce3310b393493b5189e0c0efbbe51",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-187000",
	                        "e0b9362b2efd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 6 (399.012641ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 19:18:04.596150   56107 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-187000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.70s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (509.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0213 19:18:14.392736   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:18:16.959357   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:18:25.239360   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:18:31.648893   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:19:05.545398   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:19:18.099291   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:19:22.354827   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:19:34.054138   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:19:40.444050   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 19:19:45.785802   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:19:47.161399   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
E0213 19:19:50.041881   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m26.667024766s)

                                                
                                                
-- stdout --
	* [old-k8s-version-187000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18165
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-187000 in cluster old-k8s-version-187000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Restarting existing docker container for "old-k8s-version-187000" ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 19:18:06.693490   56138 out.go:291] Setting OutFile to fd 1 ...
	I0213 19:18:06.693783   56138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:18:06.693789   56138 out.go:304] Setting ErrFile to fd 2...
	I0213 19:18:06.693793   56138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:18:06.693980   56138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 19:18:06.695598   56138 out.go:298] Setting JSON to false
	I0213 19:18:06.719074   56138 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":17545,"bootTime":1707863141,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 19:18:06.719196   56138 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 19:18:06.741373   56138 out.go:177] * [old-k8s-version-187000] minikube v1.32.0 on Darwin 14.3.1
	I0213 19:18:06.784212   56138 out.go:177]   - MINIKUBE_LOCATION=18165
	I0213 19:18:06.784322   56138 notify.go:220] Checking for updates...
	I0213 19:18:06.827927   56138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:18:06.856366   56138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 19:18:06.876641   56138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 19:18:06.897557   56138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	I0213 19:18:06.918798   56138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 19:18:06.940170   56138 config.go:182] Loaded profile config "old-k8s-version-187000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0213 19:18:06.961472   56138 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0213 19:18:06.982667   56138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 19:18:07.040058   56138 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 19:18:07.040219   56138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:18:07.149170   56138 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 03:18:07.138871445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:18:07.193260   56138 out.go:177] * Using the docker driver based on existing profile
	I0213 19:18:07.214523   56138 start.go:298] selected driver: docker
	I0213 19:18:07.214540   56138 start.go:902] validating driver "docker" against &{Name:old-k8s-version-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:18:07.214626   56138 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 19:18:07.217993   56138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:18:07.326043   56138 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 03:18:07.31582106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:18:07.326268   56138 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 19:18:07.326318   56138 cni.go:84] Creating CNI manager for ""
	I0213 19:18:07.326331   56138 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 19:18:07.326340   56138 start_flags.go:321] config:
	{Name:old-k8s-version-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:18:07.368200   56138 out.go:177] * Starting control plane node old-k8s-version-187000 in cluster old-k8s-version-187000
	I0213 19:18:07.389401   56138 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 19:18:07.410439   56138 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 19:18:07.431538   56138 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 19:18:07.431632   56138 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0213 19:18:07.431635   56138 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 19:18:07.431659   56138 cache.go:56] Caching tarball of preloaded images
	I0213 19:18:07.431871   56138 preload.go:174] Found /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 19:18:07.431890   56138 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 19:18:07.432718   56138 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/config.json ...
	I0213 19:18:07.484271   56138 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 19:18:07.484284   56138 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 19:18:07.484302   56138 cache.go:194] Successfully downloaded all kic artifacts
	I0213 19:18:07.484342   56138 start.go:365] acquiring machines lock for old-k8s-version-187000: {Name:mk0547224fc7a975c28768405bd89305d57998ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 19:18:07.484430   56138 start.go:369] acquired machines lock for "old-k8s-version-187000" in 66.697µs
	I0213 19:18:07.484451   56138 start.go:96] Skipping create...Using existing machine configuration
	I0213 19:18:07.484460   56138 fix.go:54] fixHost starting: 
	I0213 19:18:07.484684   56138 cli_runner.go:164] Run: docker container inspect old-k8s-version-187000 --format={{.State.Status}}
	I0213 19:18:07.538385   56138 fix.go:102] recreateIfNeeded on old-k8s-version-187000: state=Stopped err=<nil>
	W0213 19:18:07.538433   56138 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 19:18:07.560129   56138 out.go:177] * Restarting existing docker container for "old-k8s-version-187000" ...
	I0213 19:18:07.601552   56138 cli_runner.go:164] Run: docker start old-k8s-version-187000
	I0213 19:18:07.864485   56138 cli_runner.go:164] Run: docker container inspect old-k8s-version-187000 --format={{.State.Status}}
	I0213 19:18:07.922492   56138 kic.go:430] container "old-k8s-version-187000" state is running.
	I0213 19:18:07.923183   56138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187000
	I0213 19:18:07.981508   56138 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/config.json ...
	I0213 19:18:07.981974   56138 machine.go:88] provisioning docker machine ...
	I0213 19:18:07.981999   56138 ubuntu.go:169] provisioning hostname "old-k8s-version-187000"
	I0213 19:18:07.982082   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:08.044426   56138 main.go:141] libmachine: Using SSH client type: native
	I0213 19:18:08.045004   56138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57241 <nil> <nil>}
	I0213 19:18:08.045035   56138 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-187000 && echo "old-k8s-version-187000" | sudo tee /etc/hostname
	I0213 19:18:08.049489   56138 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0213 19:18:11.213338   56138 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-187000
	
	I0213 19:18:11.213426   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:11.266617   56138 main.go:141] libmachine: Using SSH client type: native
	I0213 19:18:11.266909   56138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57241 <nil> <nil>}
	I0213 19:18:11.266923   56138 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-187000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-187000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-187000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 19:18:11.405191   56138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 19:18:11.405211   56138 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18165-38421/.minikube CaCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18165-38421/.minikube}
	I0213 19:18:11.405228   56138 ubuntu.go:177] setting up certificates
	I0213 19:18:11.405234   56138 provision.go:83] configureAuth start
	I0213 19:18:11.405304   56138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187000
	I0213 19:18:11.457997   56138 provision.go:138] copyHostCerts
	I0213 19:18:11.458109   56138 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem, removing ...
	I0213 19:18:11.458121   56138 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
	I0213 19:18:11.458262   56138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem (1078 bytes)
	I0213 19:18:11.458510   56138 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem, removing ...
	I0213 19:18:11.458516   56138 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
	I0213 19:18:11.458615   56138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem (1123 bytes)
	I0213 19:18:11.458789   56138 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem, removing ...
	I0213 19:18:11.458795   56138 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
	I0213 19:18:11.458890   56138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem (1679 bytes)
	I0213 19:18:11.459045   56138 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-187000 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-187000]
	I0213 19:18:11.561178   56138 provision.go:172] copyRemoteCerts
	I0213 19:18:11.561250   56138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 19:18:11.561311   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:11.614243   56138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57241 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa Username:docker}
	I0213 19:18:11.719322   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 19:18:11.774105   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0213 19:18:11.814600   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 19:18:11.855493   56138 provision.go:86] duration metric: configureAuth took 450.24044ms
	I0213 19:18:11.855530   56138 ubuntu.go:193] setting minikube options for container-runtime
	I0213 19:18:11.855758   56138 config.go:182] Loaded profile config "old-k8s-version-187000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0213 19:18:11.855875   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:11.909424   56138 main.go:141] libmachine: Using SSH client type: native
	I0213 19:18:11.909728   56138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57241 <nil> <nil>}
	I0213 19:18:11.909738   56138 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 19:18:12.049694   56138 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 19:18:12.049710   56138 ubuntu.go:71] root file system type: overlay
	I0213 19:18:12.049824   56138 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 19:18:12.049901   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:12.103089   56138 main.go:141] libmachine: Using SSH client type: native
	I0213 19:18:12.103390   56138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57241 <nil> <nil>}
	I0213 19:18:12.103440   56138 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 19:18:12.262226   56138 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 19:18:12.262337   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:12.315408   56138 main.go:141] libmachine: Using SSH client type: native
	I0213 19:18:12.315709   56138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57241 <nil> <nil>}
	I0213 19:18:12.315722   56138 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 19:18:12.466215   56138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 19:18:12.466233   56138 machine.go:91] provisioned docker machine in 4.484232832s
	I0213 19:18:12.466242   56138 start.go:300] post-start starting for "old-k8s-version-187000" (driver="docker")
	I0213 19:18:12.466249   56138 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 19:18:12.466326   56138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 19:18:12.466384   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:12.518849   56138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57241 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa Username:docker}
	I0213 19:18:12.622405   56138 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 19:18:12.626569   56138 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 19:18:12.626595   56138 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 19:18:12.626605   56138 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 19:18:12.626610   56138 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 19:18:12.626619   56138 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/addons for local assets ...
	I0213 19:18:12.626720   56138 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/files for local assets ...
	I0213 19:18:12.626915   56138 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> 388992.pem in /etc/ssl/certs
	I0213 19:18:12.627120   56138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 19:18:12.641640   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:18:12.682623   56138 start.go:303] post-start completed in 216.371383ms
	I0213 19:18:12.682697   56138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 19:18:12.682784   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:12.736227   56138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57241 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa Username:docker}
	I0213 19:18:12.829367   56138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 19:18:12.834405   56138 fix.go:56] fixHost completed within 5.34992183s
	I0213 19:18:12.834425   56138 start.go:83] releasing machines lock for "old-k8s-version-187000", held for 5.349965838s
	I0213 19:18:12.834524   56138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187000
	I0213 19:18:12.886094   56138 ssh_runner.go:195] Run: cat /version.json
	I0213 19:18:12.886110   56138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 19:18:12.886177   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:12.886182   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:12.945843   56138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57241 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa Username:docker}
	I0213 19:18:12.945843   56138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57241 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/old-k8s-version-187000/id_rsa Username:docker}
	I0213 19:18:13.039193   56138 ssh_runner.go:195] Run: systemctl --version
	I0213 19:18:13.164540   56138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 19:18:13.169773   56138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 19:18:13.169845   56138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 19:18:13.185227   56138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 19:18:13.200557   56138 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0213 19:18:13.200577   56138 start.go:475] detecting cgroup driver to use...
	I0213 19:18:13.200594   56138 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:18:13.200718   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:18:13.229304   56138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0213 19:18:13.245636   56138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 19:18:13.263288   56138 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 19:18:13.263346   56138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 19:18:13.280929   56138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:18:13.297240   56138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 19:18:13.313753   56138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:18:13.330315   56138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 19:18:13.346513   56138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 19:18:13.362861   56138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 19:18:13.377528   56138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 19:18:13.393320   56138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:18:13.455837   56138 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 19:18:13.542130   56138 start.go:475] detecting cgroup driver to use...
	I0213 19:18:13.542150   56138 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:18:13.542212   56138 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 19:18:13.560948   56138 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 19:18:13.561037   56138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 19:18:13.583645   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:18:13.618222   56138 ssh_runner.go:195] Run: which cri-dockerd
	I0213 19:18:13.623483   56138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 19:18:13.639943   56138 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 19:18:13.670068   56138 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 19:18:13.739619   56138 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 19:18:13.829652   56138 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 19:18:13.829742   56138 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 19:18:13.859673   56138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:18:13.925841   56138 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 19:18:14.196770   56138 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:18:14.220358   56138 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:18:14.288649   56138 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0213 19:18:14.288773   56138 cli_runner.go:164] Run: docker exec -t old-k8s-version-187000 dig +short host.docker.internal
	I0213 19:18:14.398902   56138 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 19:18:14.398989   56138 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 19:18:14.403938   56138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 19:18:14.421045   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:14.474537   56138 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 19:18:14.474618   56138 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:18:14.493719   56138 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 19:18:14.493742   56138 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0213 19:18:14.493808   56138 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 19:18:14.509731   56138 ssh_runner.go:195] Run: which lz4
	I0213 19:18:14.513976   56138 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 19:18:14.518147   56138 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 19:18:14.518171   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0213 19:18:20.741418   56138 docker.go:649] Took 6.227475 seconds to copy over tarball
	I0213 19:18:20.741549   56138 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 19:18:22.385159   56138 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.643587789s)
	I0213 19:18:22.385173   56138 ssh_runner.go:146] rm: /preloaded.tar.lz4
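
The stat probe above is an existence check: the ~370MB preload tarball is only transferred when /preloaded.tar.lz4 is absent, then extracted into /var and removed. A sketch of that gate, using the local filesystem to stand in for the node; preloadPresent is a hypothetical helper.

package main

import (
	"fmt"
	"os"
)

// preloadPresent reports whether the preload tarball already exists at path.
func preloadPresent(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func main() {
	const target = "/preloaded.tar.lz4"
	if preloadPresent(target) {
		fmt.Println("preload already on node, skipping transfer")
		return
	}
	fmt.Println("preload missing, would scp the cached tarball and then run:")
	fmt.Println(`sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf ` + target)
}
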
	I0213 19:18:22.436719   56138 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 19:18:22.453124   56138 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0213 19:18:22.483083   56138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:18:22.546675   56138 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 19:18:23.038624   56138 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:18:23.057503   56138 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 19:18:23.057534   56138 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0213 19:18:23.057546   56138 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 19:18:23.069467   56138 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 19:18:23.069879   56138 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 19:18:23.070022   56138 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 19:18:23.070108   56138 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0213 19:18:23.070313   56138 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0213 19:18:23.070332   56138 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 19:18:23.070646   56138 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 19:18:23.070649   56138 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0213 19:18:23.074843   56138 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 19:18:23.076542   56138 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 19:18:23.076640   56138 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0213 19:18:23.076698   56138 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 19:18:23.076997   56138 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 19:18:23.077101   56138 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0213 19:18:23.077145   56138 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 19:18:23.077126   56138 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0213 19:18:25.170109   56138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0213 19:18:25.194036   56138 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0213 19:18:25.194077   56138 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0213 19:18:25.194129   56138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0213 19:18:25.208377   56138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0213 19:18:25.218093   56138 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0213 19:18:25.232246   56138 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0213 19:18:25.232275   56138 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0213 19:18:25.232324   56138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0213 19:18:25.245092   56138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 19:18:25.252166   56138 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0213 19:18:25.265449   56138 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0213 19:18:25.265473   56138 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 19:18:25.265544   56138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 19:18:25.275281   56138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0213 19:18:25.280582   56138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0213 19:18:25.283912   56138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0213 19:18:25.288479   56138 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0213 19:18:25.296034   56138 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0213 19:18:25.296047   56138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0213 19:18:25.296066   56138 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 19:18:25.296116   56138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0213 19:18:25.305279   56138 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0213 19:18:25.305321   56138 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 19:18:25.305425   56138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0213 19:18:25.307068   56138 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0213 19:18:25.307095   56138 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 19:18:25.307152   56138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0213 19:18:25.315773   56138 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 19:18:25.364336   56138 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0213 19:18:25.364354   56138 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0213 19:18:25.364377   56138 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0213 19:18:25.364435   56138 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0213 19:18:25.375759   56138 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0213 19:18:25.375796   56138 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0213 19:18:25.393069   56138 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0213 19:18:25.393134   56138 cache_images.go:92] LoadImages completed in 2.335577236s
	W0213 19:18:25.393176   56138 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
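
The "needs transfer" lines above come down to one comparison: an image must be loaded from the local cache when its tag is missing from the runtime or resolves to a different image ID than expected. A tiny sketch of that decision; needsTransfer is a hypothetical helper, not minikube's cache_images implementation.

package main

import "fmt"

// needsTransfer reports whether the image must be (re)loaded into the runtime.
func needsTransfer(wantID, haveID string) bool {
	return haveID == "" || haveID != wantID
}

func main() {
	// The preload ships k8s.gcr.io tags, but the images are requested under
	// registry.k8s.io, so `docker image inspect` finds nothing, every image is
	// marked for transfer, and loading then fails because the per-image cache
	// file is absent on the host (see the warning above).
	want := "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" // expected ID for registry.k8s.io/coredns:1.6.2
	fmt.Println(needsTransfer(want, "")) // true -> needs transfer
}
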
	I0213 19:18:25.393244   56138 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 19:18:25.444220   56138 cni.go:84] Creating CNI manager for ""
	I0213 19:18:25.444238   56138 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 19:18:25.444258   56138 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 19:18:25.444274   56138 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-187000 NodeName:old-k8s-version-187000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 19:18:25.444366   56138 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-187000"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-187000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 19:18:25.444488   56138 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-187000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
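
The kubeadm config and kubelet unit dumped above are rendered from the option struct logged at kubeadm.go:176. A minimal sketch of how such a fragment could be templated; the template text and field names here are illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type opts struct {
	ClusterName, ControlPlaneAddress, KubernetesVersion string
	DNSDomain, PodSubnet, ServiceCIDR                   string
	APIServerPort                                       int
}

func main() {
	// Values taken from the kubeadm options line in the log above.
	o := opts{
		ClusterName:         "old-k8s-version-187000",
		ControlPlaneAddress: "control-plane.minikube.internal",
		KubernetesVersion:   "v1.16.0",
		DNSDomain:           "cluster.local",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
		APIServerPort:       8443,
	}
	_ = template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, o)
}
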
	I0213 19:18:25.444548   56138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0213 19:18:25.460107   56138 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 19:18:25.460216   56138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 19:18:25.475529   56138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0213 19:18:25.506945   56138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 19:18:25.535484   56138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0213 19:18:25.566373   56138 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0213 19:18:25.570890   56138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 19:18:25.588515   56138 certs.go:56] Setting up /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000 for IP: 192.168.85.2
	I0213 19:18:25.588535   56138 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5f1a81e3b2f96d4314e8cdee92a3e3396cb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:18:25.588714   56138 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key
	I0213 19:18:25.588815   56138 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key
	I0213 19:18:25.588931   56138 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/client.key
	I0213 19:18:25.589011   56138 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.key.43b9df8c
	I0213 19:18:25.589081   56138 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/proxy-client.key
	I0213 19:18:25.589313   56138 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem (1338 bytes)
	W0213 19:18:25.589362   56138 certs.go:433] ignoring /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899_empty.pem, impossibly tiny 0 bytes
	I0213 19:18:25.589372   56138 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 19:18:25.589402   56138 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem (1078 bytes)
	I0213 19:18:25.589434   56138 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem (1123 bytes)
	I0213 19:18:25.589466   56138 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem (1679 bytes)
	I0213 19:18:25.589535   56138 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:18:25.590032   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 19:18:25.631150   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 19:18:25.673381   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 19:18:25.715317   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/old-k8s-version-187000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 19:18:25.756728   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 19:18:25.799044   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 19:18:25.840523   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 19:18:25.881303   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 19:18:25.922751   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /usr/share/ca-certificates/388992.pem (1708 bytes)
	I0213 19:18:25.965022   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 19:18:26.008508   56138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem --> /usr/share/ca-certificates/38899.pem (1338 bytes)
	I0213 19:18:26.051041   56138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 19:18:26.080527   56138 ssh_runner.go:195] Run: openssl version
	I0213 19:18:26.086607   56138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388992.pem && ln -fs /usr/share/ca-certificates/388992.pem /etc/ssl/certs/388992.pem"
	I0213 19:18:26.102307   56138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388992.pem
	I0213 19:18:26.107006   56138 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 02:17 /usr/share/ca-certificates/388992.pem
	I0213 19:18:26.107055   56138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388992.pem
	I0213 19:18:26.114600   56138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/388992.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 19:18:26.130504   56138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 19:18:26.146962   56138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:18:26.151269   56138 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:18:26.151316   56138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:18:26.157984   56138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 19:18:26.173181   56138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38899.pem && ln -fs /usr/share/ca-certificates/38899.pem /etc/ssl/certs/38899.pem"
	I0213 19:18:26.189321   56138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38899.pem
	I0213 19:18:26.194226   56138 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 02:17 /usr/share/ca-certificates/38899.pem
	I0213 19:18:26.194279   56138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38899.pem
	I0213 19:18:26.201066   56138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38899.pem /etc/ssl/certs/51391683.0"
	I0213 19:18:26.216391   56138 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 19:18:26.220635   56138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 19:18:26.227410   56138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 19:18:26.234841   56138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 19:18:26.241645   56138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 19:18:26.248522   56138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 19:18:26.255471   56138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
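
Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same probe expressed with Go's crypto/x509, shown as a sketch; the path is one of the files checked in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d,
// matching the semantics of `openssl x509 -noout -checkend <seconds>`.
func checkend(path string, d time.Duration) (bool, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(b)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}
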
	I0213 19:18:26.262507   56138 kubeadm.go:404] StartCluster: {Name:old-k8s-version-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:18:26.262623   56138 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:18:26.282385   56138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 19:18:26.298127   56138 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 19:18:26.298173   56138 kubeadm.go:636] restartCluster start
	I0213 19:18:26.298278   56138 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 19:18:26.313574   56138 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:26.313701   56138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-187000
	I0213 19:18:26.370545   56138 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-187000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:18:26.370700   56138 kubeconfig.go:146] "old-k8s-version-187000" context is missing from /Users/jenkins/minikube-integration/18165-38421/kubeconfig - will repair!
	I0213 19:18:26.371758   56138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/kubeconfig: {Name:mk18bf84f3ce48ab7f0238c5bd9b6dfe6fbb866a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:18:26.373581   56138 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 19:18:26.389488   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:26.389554   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:26.406558   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:26.890283   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:26.890370   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:26.907541   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:27.390295   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:27.390353   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:27.407088   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:27.890433   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:27.890516   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:27.907647   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:28.390569   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:28.390661   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:28.408334   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:28.890507   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:28.890592   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:28.908249   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:29.390335   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:29.390468   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:29.408611   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:29.889934   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:29.890039   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:29.908247   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:30.391561   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:30.391730   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:30.409383   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:30.890302   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:30.890437   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:30.907350   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:31.391595   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:31.391777   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:31.410187   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:31.890606   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:31.890705   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:31.909206   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:32.391150   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:32.391208   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:32.407584   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:32.890290   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:32.890429   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:32.907519   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:33.390310   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:33.390435   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:33.407968   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:33.890307   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:33.890416   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:33.907375   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:34.390316   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:34.390466   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:34.408229   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:34.890298   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:34.890419   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:34.907380   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:35.390289   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:35.390474   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:35.408265   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:35.890303   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:35.890415   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:35.907924   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:36.389834   56138 api_server.go:166] Checking apiserver status ...
	I0213 19:18:36.389913   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:18:36.406341   56138 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:18:36.406357   56138 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
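
The repeated "Checking apiserver status" entries above are a poll at roughly 500ms intervals for a kube-apiserver process, abandoned once the deadline passes. A sketch of that wait loop; it is not minikube's api_server implementation (which also probes the API once a pid is found), just the retry cadence visible in the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process every 500ms until the
// timeout elapses, mirroring the pgrep probes in the log above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("context deadline exceeded: apiserver process never appeared")
}

func main() {
	if err := waitForAPIServer(10 * time.Second); err != nil {
		fmt.Println("needs reconfigure:", err)
	}
}
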
	I0213 19:18:36.406369   56138 kubeadm.go:1135] stopping kube-system containers ...
	I0213 19:18:36.406440   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:18:36.425325   56138 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 19:18:36.443469   56138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:18:36.458328   56138 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Feb 14 03:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Feb 14 03:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Feb 14 03:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Feb 14 03:14 /etc/kubernetes/scheduler.conf
	
	I0213 19:18:36.458393   56138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0213 19:18:36.473432   56138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0213 19:18:36.488111   56138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0213 19:18:36.503318   56138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0213 19:18:36.519449   56138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 19:18:36.535100   56138 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 19:18:36.535114   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:18:36.602912   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:18:37.305676   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:18:37.504399   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:18:37.593090   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:18:37.685973   56138 api_server.go:52] waiting for apiserver process to appear ...
	I0213 19:18:37.686047   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:38.186885   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:38.686130   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:39.186653   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:39.686618   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:40.187154   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:40.686299   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:41.186339   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:41.686217   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:42.186154   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:42.686121   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:43.186369   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:43.687465   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:44.187468   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:44.686507   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:45.186562   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:45.687035   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:46.186135   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:46.686210   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:47.188079   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:47.686282   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:48.187241   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:48.686777   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:49.187325   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:49.686720   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:50.186429   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:50.686977   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:51.187387   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:51.686628   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:52.186135   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:52.686152   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:53.186237   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:53.686269   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:54.187355   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:54.686135   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:55.186589   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:55.686536   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:56.186134   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:56.686163   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:57.186146   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:57.686183   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:58.186432   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:58.686162   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:59.186173   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:18:59.686159   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:00.186171   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:00.686303   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:01.186917   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:01.686201   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:02.186133   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:02.686330   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:03.186942   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:03.686170   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:04.186368   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:04.686230   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:05.186275   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:05.686157   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:06.186863   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:06.686947   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:07.187135   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:07.686128   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:08.186222   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:08.686101   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:09.186110   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:09.686154   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:10.186104   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:10.687290   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:11.186085   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:11.686771   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:12.186202   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:12.686749   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:13.187014   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:13.686939   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:14.186319   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:14.686039   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:15.187278   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:15.686056   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:16.186912   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:16.686098   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:17.186157   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:17.687117   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:18.187235   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:18.686823   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:19.188083   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:19.687506   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:20.186349   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:20.686038   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:21.186535   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:21.686411   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:22.186186   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:22.686044   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:23.186087   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:23.687143   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:24.186859   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:24.686470   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:25.186674   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:25.686999   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:26.186444   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:26.686230   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:27.186073   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:27.687144   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:28.186043   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:28.686139   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:29.186079   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:29.686945   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:30.187136   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:30.686034   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:31.186465   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:31.686730   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:32.186013   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:32.685997   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:33.186311   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:33.686047   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:34.186276   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:34.686119   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:35.186084   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:35.686369   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:36.186089   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:36.686201   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:37.186300   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:37.686535   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:19:37.706800   56138 logs.go:276] 0 containers: []
	W0213 19:19:37.706814   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:19:37.706880   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:19:37.724897   56138 logs.go:276] 0 containers: []
	W0213 19:19:37.724910   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:19:37.724984   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:19:37.744305   56138 logs.go:276] 0 containers: []
	W0213 19:19:37.744318   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:19:37.744398   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:19:37.763727   56138 logs.go:276] 0 containers: []
	W0213 19:19:37.763741   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:19:37.763806   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:19:37.781667   56138 logs.go:276] 0 containers: []
	W0213 19:19:37.781682   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:19:37.781755   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:19:37.799267   56138 logs.go:276] 0 containers: []
	W0213 19:19:37.799281   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:19:37.799349   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:19:37.818255   56138 logs.go:276] 0 containers: []
	W0213 19:19:37.818269   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:19:37.818340   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:19:37.835065   56138 logs.go:276] 0 containers: []
	W0213 19:19:37.835078   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:19:37.835085   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:19:37.835092   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:19:37.877422   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:19:37.877439   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:19:37.897481   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:19:37.897496   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:19:37.962222   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:19:37.962237   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:19:37.962269   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:19:37.985331   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:19:37.985367   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:19:40.555533   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:40.579282   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:19:40.600041   56138 logs.go:276] 0 containers: []
	W0213 19:19:40.600056   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:19:40.600130   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:19:40.620460   56138 logs.go:276] 0 containers: []
	W0213 19:19:40.620472   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:19:40.620535   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:19:40.637866   56138 logs.go:276] 0 containers: []
	W0213 19:19:40.637885   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:19:40.637962   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:19:40.655376   56138 logs.go:276] 0 containers: []
	W0213 19:19:40.655391   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:19:40.655466   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:19:40.673761   56138 logs.go:276] 0 containers: []
	W0213 19:19:40.673786   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:19:40.673860   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:19:40.692834   56138 logs.go:276] 0 containers: []
	W0213 19:19:40.692853   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:19:40.692922   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:19:40.712369   56138 logs.go:276] 0 containers: []
	W0213 19:19:40.712383   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:19:40.712450   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:19:40.730956   56138 logs.go:276] 0 containers: []
	W0213 19:19:40.730970   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:19:40.730978   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:19:40.730992   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:19:40.775288   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:19:40.775306   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:19:40.795725   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:19:40.795747   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:19:40.874111   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:19:40.874125   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:19:40.874144   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:19:40.895094   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:19:40.895109   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:19:43.460756   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:43.478210   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:19:43.495762   56138 logs.go:276] 0 containers: []
	W0213 19:19:43.495778   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:19:43.495863   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:19:43.514738   56138 logs.go:276] 0 containers: []
	W0213 19:19:43.514750   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:19:43.514816   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:19:43.532661   56138 logs.go:276] 0 containers: []
	W0213 19:19:43.532674   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:19:43.532739   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:19:43.550635   56138 logs.go:276] 0 containers: []
	W0213 19:19:43.550663   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:19:43.550737   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:19:43.569204   56138 logs.go:276] 0 containers: []
	W0213 19:19:43.569221   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:19:43.569288   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:19:43.588557   56138 logs.go:276] 0 containers: []
	W0213 19:19:43.588576   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:19:43.588653   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:19:43.607808   56138 logs.go:276] 0 containers: []
	W0213 19:19:43.607823   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:19:43.607912   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:19:43.626791   56138 logs.go:276] 0 containers: []
	W0213 19:19:43.626804   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:19:43.626812   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:19:43.626823   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:19:43.670214   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:19:43.670231   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:19:43.690936   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:19:43.690951   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:19:43.754428   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:19:43.754444   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:19:43.754452   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:19:43.775344   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:19:43.775358   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:19:46.339076   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:46.356158   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:19:46.373288   56138 logs.go:276] 0 containers: []
	W0213 19:19:46.373303   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:19:46.373369   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:19:46.392462   56138 logs.go:276] 0 containers: []
	W0213 19:19:46.392475   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:19:46.392555   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:19:46.411157   56138 logs.go:276] 0 containers: []
	W0213 19:19:46.411171   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:19:46.411257   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:19:46.429628   56138 logs.go:276] 0 containers: []
	W0213 19:19:46.429643   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:19:46.429707   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:19:46.448138   56138 logs.go:276] 0 containers: []
	W0213 19:19:46.448152   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:19:46.448218   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:19:46.466696   56138 logs.go:276] 0 containers: []
	W0213 19:19:46.466709   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:19:46.466780   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:19:46.484983   56138 logs.go:276] 0 containers: []
	W0213 19:19:46.484997   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:19:46.485083   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:19:46.506568   56138 logs.go:276] 0 containers: []
	W0213 19:19:46.506583   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:19:46.506593   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:19:46.506611   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:19:46.554882   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:19:46.554902   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:19:46.582246   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:19:46.582271   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:19:46.695061   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:19:46.715049   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:19:46.715059   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:19:46.736888   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:19:46.736904   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:19:49.301852   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:49.318641   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:19:49.338312   56138 logs.go:276] 0 containers: []
	W0213 19:19:49.338327   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:19:49.338393   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:19:49.358802   56138 logs.go:276] 0 containers: []
	W0213 19:19:49.358816   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:19:49.358885   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:19:49.378303   56138 logs.go:276] 0 containers: []
	W0213 19:19:49.378317   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:19:49.378386   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:19:49.397814   56138 logs.go:276] 0 containers: []
	W0213 19:19:49.397834   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:19:49.397907   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:19:49.418031   56138 logs.go:276] 0 containers: []
	W0213 19:19:49.418045   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:19:49.418114   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:19:49.437357   56138 logs.go:276] 0 containers: []
	W0213 19:19:49.437372   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:19:49.437440   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:19:49.455016   56138 logs.go:276] 0 containers: []
	W0213 19:19:49.455031   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:19:49.455103   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:19:49.473821   56138 logs.go:276] 0 containers: []
	W0213 19:19:49.473833   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:19:49.473845   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:19:49.473856   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:19:49.536013   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:19:49.536025   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:19:49.536034   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:19:49.557492   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:19:49.557509   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:19:49.619365   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:19:49.619384   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:19:49.662173   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:19:49.662189   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:19:52.184720   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:52.203725   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:19:52.224869   56138 logs.go:276] 0 containers: []
	W0213 19:19:52.224884   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:19:52.224952   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:19:52.246383   56138 logs.go:276] 0 containers: []
	W0213 19:19:52.246397   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:19:52.246467   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:19:52.265296   56138 logs.go:276] 0 containers: []
	W0213 19:19:52.265310   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:19:52.265377   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:19:52.285913   56138 logs.go:276] 0 containers: []
	W0213 19:19:52.285928   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:19:52.285994   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:19:52.305723   56138 logs.go:276] 0 containers: []
	W0213 19:19:52.305748   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:19:52.305842   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:19:52.327854   56138 logs.go:276] 0 containers: []
	W0213 19:19:52.327867   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:19:52.327944   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:19:52.347805   56138 logs.go:276] 0 containers: []
	W0213 19:19:52.347818   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:19:52.347886   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:19:52.367444   56138 logs.go:276] 0 containers: []
	W0213 19:19:52.367459   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:19:52.367467   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:19:52.367475   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:19:52.389227   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:19:52.389242   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:19:52.460826   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:19:52.460839   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:19:52.460850   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:19:52.486990   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:19:52.487008   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:19:52.568441   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:19:52.568458   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:19:55.119363   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:55.138370   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:19:55.159721   56138 logs.go:276] 0 containers: []
	W0213 19:19:55.159737   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:19:55.159804   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:19:55.183090   56138 logs.go:276] 0 containers: []
	W0213 19:19:55.183106   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:19:55.183173   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:19:55.202792   56138 logs.go:276] 0 containers: []
	W0213 19:19:55.202806   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:19:55.202876   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:19:55.223089   56138 logs.go:276] 0 containers: []
	W0213 19:19:55.223103   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:19:55.223173   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:19:55.243441   56138 logs.go:276] 0 containers: []
	W0213 19:19:55.243459   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:19:55.243556   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:19:55.264943   56138 logs.go:276] 0 containers: []
	W0213 19:19:55.264957   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:19:55.265047   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:19:55.286498   56138 logs.go:276] 0 containers: []
	W0213 19:19:55.286516   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:19:55.286605   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:19:55.308460   56138 logs.go:276] 0 containers: []
	W0213 19:19:55.308475   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:19:55.308483   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:19:55.308490   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:19:55.363328   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:19:55.363355   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:19:55.387087   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:19:55.387102   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:19:55.463107   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:19:55.463119   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:19:55.463128   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:19:55.488900   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:19:55.488926   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:19:58.072435   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:19:58.091696   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:19:58.112400   56138 logs.go:276] 0 containers: []
	W0213 19:19:58.112415   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:19:58.112490   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:19:58.133069   56138 logs.go:276] 0 containers: []
	W0213 19:19:58.133084   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:19:58.133166   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:19:58.153952   56138 logs.go:276] 0 containers: []
	W0213 19:19:58.153966   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:19:58.154036   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:19:58.173779   56138 logs.go:276] 0 containers: []
	W0213 19:19:58.173793   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:19:58.173869   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:19:58.193865   56138 logs.go:276] 0 containers: []
	W0213 19:19:58.193884   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:19:58.193968   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:19:58.215927   56138 logs.go:276] 0 containers: []
	W0213 19:19:58.215943   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:19:58.216019   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:19:58.238689   56138 logs.go:276] 0 containers: []
	W0213 19:19:58.238705   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:19:58.238785   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:19:58.258334   56138 logs.go:276] 0 containers: []
	W0213 19:19:58.258349   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:19:58.258356   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:19:58.258364   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:19:58.307852   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:19:58.307877   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:19:58.329320   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:19:58.329335   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:19:58.393882   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:19:58.393894   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:19:58.393902   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:19:58.417914   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:19:58.417934   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:00.987007   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:01.012892   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:01.034326   56138 logs.go:276] 0 containers: []
	W0213 19:20:01.034340   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:01.034415   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:01.052304   56138 logs.go:276] 0 containers: []
	W0213 19:20:01.052318   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:01.052381   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:01.071135   56138 logs.go:276] 0 containers: []
	W0213 19:20:01.071149   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:01.071217   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:01.099296   56138 logs.go:276] 0 containers: []
	W0213 19:20:01.099334   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:01.099518   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:01.127062   56138 logs.go:276] 0 containers: []
	W0213 19:20:01.127090   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:01.127212   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:01.146023   56138 logs.go:276] 0 containers: []
	W0213 19:20:01.146038   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:01.146108   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:01.164495   56138 logs.go:276] 0 containers: []
	W0213 19:20:01.164514   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:01.164594   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:01.187803   56138 logs.go:276] 0 containers: []
	W0213 19:20:01.187823   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:01.187840   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:01.187856   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:01.240943   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:01.240958   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:01.261282   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:01.261299   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:01.344273   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:01.344303   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:01.344310   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:01.367792   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:01.367808   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:03.947325   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:03.967351   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:03.991082   56138 logs.go:276] 0 containers: []
	W0213 19:20:03.991097   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:03.991171   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:04.012734   56138 logs.go:276] 0 containers: []
	W0213 19:20:04.012751   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:04.012823   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:04.036867   56138 logs.go:276] 0 containers: []
	W0213 19:20:04.036881   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:04.036959   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:04.082234   56138 logs.go:276] 0 containers: []
	W0213 19:20:04.082249   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:04.082325   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:04.101686   56138 logs.go:276] 0 containers: []
	W0213 19:20:04.101701   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:04.101771   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:04.121666   56138 logs.go:276] 0 containers: []
	W0213 19:20:04.121681   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:04.121748   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:04.141101   56138 logs.go:276] 0 containers: []
	W0213 19:20:04.141116   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:04.141181   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:04.160053   56138 logs.go:276] 0 containers: []
	W0213 19:20:04.160088   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:04.160095   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:04.160102   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:04.226390   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:04.226406   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:04.269279   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:04.269294   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:04.289498   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:04.289518   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:04.355088   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:04.355104   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:04.355112   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:06.878153   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:06.896951   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:06.916715   56138 logs.go:276] 0 containers: []
	W0213 19:20:06.916736   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:06.916819   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:06.937287   56138 logs.go:276] 0 containers: []
	W0213 19:20:06.937302   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:06.937376   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:06.961842   56138 logs.go:276] 0 containers: []
	W0213 19:20:06.961859   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:06.961937   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:06.984573   56138 logs.go:276] 0 containers: []
	W0213 19:20:06.984610   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:06.984725   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:07.008849   56138 logs.go:276] 0 containers: []
	W0213 19:20:07.008867   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:07.008956   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:07.029935   56138 logs.go:276] 0 containers: []
	W0213 19:20:07.029953   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:07.030027   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:07.049957   56138 logs.go:276] 0 containers: []
	W0213 19:20:07.049971   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:07.050050   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:07.070629   56138 logs.go:276] 0 containers: []
	W0213 19:20:07.070643   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:07.070650   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:07.070657   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:07.124934   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:07.124956   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:07.146233   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:07.146249   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:07.228913   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:07.228924   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:07.228931   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:07.253032   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:07.253048   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:09.818326   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:09.835132   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:09.853963   56138 logs.go:276] 0 containers: []
	W0213 19:20:09.853976   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:09.854041   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:09.872037   56138 logs.go:276] 0 containers: []
	W0213 19:20:09.872051   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:09.872120   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:09.890796   56138 logs.go:276] 0 containers: []
	W0213 19:20:09.890809   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:09.890878   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:09.909303   56138 logs.go:276] 0 containers: []
	W0213 19:20:09.909318   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:09.909387   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:09.926570   56138 logs.go:276] 0 containers: []
	W0213 19:20:09.926584   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:09.926657   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:09.946138   56138 logs.go:276] 0 containers: []
	W0213 19:20:09.946171   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:09.946235   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:09.964636   56138 logs.go:276] 0 containers: []
	W0213 19:20:09.964650   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:09.964721   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:09.985652   56138 logs.go:276] 0 containers: []
	W0213 19:20:09.985668   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:09.985675   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:09.985683   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:10.092089   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:10.092105   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:10.139410   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:10.139431   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:10.161293   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:10.161308   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:10.229784   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:10.229795   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:10.229803   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:12.752055   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:12.770230   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:12.789683   56138 logs.go:276] 0 containers: []
	W0213 19:20:12.789695   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:12.789774   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:12.810167   56138 logs.go:276] 0 containers: []
	W0213 19:20:12.810181   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:12.810293   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:12.829309   56138 logs.go:276] 0 containers: []
	W0213 19:20:12.829322   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:12.829390   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:12.848805   56138 logs.go:276] 0 containers: []
	W0213 19:20:12.848819   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:12.848919   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:12.866833   56138 logs.go:276] 0 containers: []
	W0213 19:20:12.866849   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:12.866939   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:12.884675   56138 logs.go:276] 0 containers: []
	W0213 19:20:12.884688   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:12.884768   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:12.904867   56138 logs.go:276] 0 containers: []
	W0213 19:20:12.904881   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:12.904963   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:12.922633   56138 logs.go:276] 0 containers: []
	W0213 19:20:12.922647   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:12.922654   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:12.922661   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:12.944627   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:12.944642   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:13.012895   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:13.012913   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:13.104509   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:13.104535   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:13.127133   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:13.127149   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:13.194587   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:15.696198   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:15.713364   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:15.731786   56138 logs.go:276] 0 containers: []
	W0213 19:20:15.731805   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:15.731893   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:15.752230   56138 logs.go:276] 0 containers: []
	W0213 19:20:15.752254   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:15.752398   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:15.772741   56138 logs.go:276] 0 containers: []
	W0213 19:20:15.772755   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:15.772823   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:15.791632   56138 logs.go:276] 0 containers: []
	W0213 19:20:15.791648   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:15.791716   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:15.810950   56138 logs.go:276] 0 containers: []
	W0213 19:20:15.810968   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:15.811035   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:15.828846   56138 logs.go:276] 0 containers: []
	W0213 19:20:15.828876   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:15.828977   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:15.848181   56138 logs.go:276] 0 containers: []
	W0213 19:20:15.848194   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:15.848266   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:15.866251   56138 logs.go:276] 0 containers: []
	W0213 19:20:15.866266   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:15.866277   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:15.866289   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:15.930461   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:15.930486   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:15.930514   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:15.953169   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:15.953183   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:16.016147   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:16.016163   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:16.060626   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:16.060644   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:18.583057   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:18.606305   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:18.626793   56138 logs.go:276] 0 containers: []
	W0213 19:20:18.626806   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:18.626899   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:18.646845   56138 logs.go:276] 0 containers: []
	W0213 19:20:18.646859   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:18.646917   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:18.664239   56138 logs.go:276] 0 containers: []
	W0213 19:20:18.664252   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:18.664322   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:18.682865   56138 logs.go:276] 0 containers: []
	W0213 19:20:18.682878   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:18.683009   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:18.703435   56138 logs.go:276] 0 containers: []
	W0213 19:20:18.703454   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:18.703546   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:18.721178   56138 logs.go:276] 0 containers: []
	W0213 19:20:18.721191   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:18.721265   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:18.741608   56138 logs.go:276] 0 containers: []
	W0213 19:20:18.741623   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:18.741698   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:18.762341   56138 logs.go:276] 0 containers: []
	W0213 19:20:18.762356   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:18.762367   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:18.762378   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:18.787135   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:18.787162   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:18.864241   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:18.864288   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:18.864298   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:18.896536   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:18.896583   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:18.966485   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:18.966532   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:21.525320   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:21.547489   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:21.567728   56138 logs.go:276] 0 containers: []
	W0213 19:20:21.567742   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:21.567809   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:21.593215   56138 logs.go:276] 0 containers: []
	W0213 19:20:21.593232   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:21.593318   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:21.620757   56138 logs.go:276] 0 containers: []
	W0213 19:20:21.620775   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:21.620860   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:21.643731   56138 logs.go:276] 0 containers: []
	W0213 19:20:21.643745   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:21.643821   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:21.661776   56138 logs.go:276] 0 containers: []
	W0213 19:20:21.661789   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:21.661854   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:21.681951   56138 logs.go:276] 0 containers: []
	W0213 19:20:21.681966   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:21.682040   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:21.710842   56138 logs.go:276] 0 containers: []
	W0213 19:20:21.734082   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:21.734173   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:21.754754   56138 logs.go:276] 0 containers: []
	W0213 19:20:21.754771   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:21.754778   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:21.754785   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:21.836335   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:21.836351   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:21.882514   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:21.882543   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:21.908300   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:21.908320   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:21.988935   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:21.988947   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:21.988955   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:24.521382   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:24.545565   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:24.568987   56138 logs.go:276] 0 containers: []
	W0213 19:20:24.569004   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:24.569076   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:24.591063   56138 logs.go:276] 0 containers: []
	W0213 19:20:24.591084   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:24.591163   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:24.616659   56138 logs.go:276] 0 containers: []
	W0213 19:20:24.616685   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:24.616782   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:24.641660   56138 logs.go:276] 0 containers: []
	W0213 19:20:24.641677   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:24.641787   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:24.662365   56138 logs.go:276] 0 containers: []
	W0213 19:20:24.662377   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:24.662446   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:24.683852   56138 logs.go:276] 0 containers: []
	W0213 19:20:24.683878   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:24.683958   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:24.707208   56138 logs.go:276] 0 containers: []
	W0213 19:20:24.707225   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:24.707314   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:24.731082   56138 logs.go:276] 0 containers: []
	W0213 19:20:24.731102   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:24.731113   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:24.731124   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:24.806094   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:24.806110   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:24.862124   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:24.862145   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:24.888694   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:24.888718   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:24.974270   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:24.974296   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:24.974310   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:27.500225   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:27.519659   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:27.543594   56138 logs.go:276] 0 containers: []
	W0213 19:20:27.543613   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:27.543687   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:27.568415   56138 logs.go:276] 0 containers: []
	W0213 19:20:27.568431   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:27.568508   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:27.591280   56138 logs.go:276] 0 containers: []
	W0213 19:20:27.591295   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:27.591371   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:27.612900   56138 logs.go:276] 0 containers: []
	W0213 19:20:27.612917   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:27.613004   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:27.637044   56138 logs.go:276] 0 containers: []
	W0213 19:20:27.637060   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:27.637154   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:27.659757   56138 logs.go:276] 0 containers: []
	W0213 19:20:27.659773   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:27.659853   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:27.682345   56138 logs.go:276] 0 containers: []
	W0213 19:20:27.682361   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:27.682426   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:27.704074   56138 logs.go:276] 0 containers: []
	W0213 19:20:27.704099   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:27.704110   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:27.704120   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:27.785503   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:27.785521   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:27.845966   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:27.845986   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:27.870357   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:27.870375   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:27.949259   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:27.949281   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:27.949290   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:30.476203   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:30.495887   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:30.520347   56138 logs.go:276] 0 containers: []
	W0213 19:20:30.520362   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:30.520430   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:30.567089   56138 logs.go:276] 0 containers: []
	W0213 19:20:30.567103   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:30.567168   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:30.588201   56138 logs.go:276] 0 containers: []
	W0213 19:20:30.588216   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:30.588283   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:30.610144   56138 logs.go:276] 0 containers: []
	W0213 19:20:30.610156   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:30.610239   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:30.628788   56138 logs.go:276] 0 containers: []
	W0213 19:20:30.628802   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:30.628890   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:30.646910   56138 logs.go:276] 0 containers: []
	W0213 19:20:30.646925   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:30.646993   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:30.664707   56138 logs.go:276] 0 containers: []
	W0213 19:20:30.664726   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:30.664831   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:30.683772   56138 logs.go:276] 0 containers: []
	W0213 19:20:30.683785   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:30.683793   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:30.683801   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:30.747981   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:30.747997   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:30.793696   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:30.793714   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:30.814404   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:30.814427   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:30.881167   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:30.881179   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:30.881186   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:33.403641   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:33.422060   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:33.442452   56138 logs.go:276] 0 containers: []
	W0213 19:20:33.442469   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:33.442561   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:33.462863   56138 logs.go:276] 0 containers: []
	W0213 19:20:33.462875   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:33.462947   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:33.484810   56138 logs.go:276] 0 containers: []
	W0213 19:20:33.484828   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:33.484902   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:33.508846   56138 logs.go:276] 0 containers: []
	W0213 19:20:33.508863   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:33.508937   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:33.530258   56138 logs.go:276] 0 containers: []
	W0213 19:20:33.530272   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:33.530341   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:33.553390   56138 logs.go:276] 0 containers: []
	W0213 19:20:33.553406   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:33.553491   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:33.574314   56138 logs.go:276] 0 containers: []
	W0213 19:20:33.574328   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:33.574397   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:33.596778   56138 logs.go:276] 0 containers: []
	W0213 19:20:33.596792   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:33.596801   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:33.596809   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:33.650522   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:33.650542   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:33.672648   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:33.672667   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:33.755162   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:33.755175   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:33.755184   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:33.777911   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:33.777929   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:36.348219   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:36.365480   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:36.385815   56138 logs.go:276] 0 containers: []
	W0213 19:20:36.385830   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:36.385895   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:36.404638   56138 logs.go:276] 0 containers: []
	W0213 19:20:36.404656   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:36.404752   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:36.424051   56138 logs.go:276] 0 containers: []
	W0213 19:20:36.424066   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:36.424148   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:36.442836   56138 logs.go:276] 0 containers: []
	W0213 19:20:36.442849   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:36.442917   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:36.461209   56138 logs.go:276] 0 containers: []
	W0213 19:20:36.461223   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:36.461290   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:36.482848   56138 logs.go:276] 0 containers: []
	W0213 19:20:36.482862   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:36.482927   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:36.504031   56138 logs.go:276] 0 containers: []
	W0213 19:20:36.504053   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:36.504122   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:36.524768   56138 logs.go:276] 0 containers: []
	W0213 19:20:36.524783   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:36.524791   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:36.524807   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:36.570727   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:36.570746   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:36.592141   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:36.592157   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:36.658283   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:36.658295   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:36.658336   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:36.680372   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:36.680387   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:39.246171   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:39.263027   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:39.280945   56138 logs.go:276] 0 containers: []
	W0213 19:20:39.280959   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:39.281026   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:39.299376   56138 logs.go:276] 0 containers: []
	W0213 19:20:39.299390   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:39.299471   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:39.317795   56138 logs.go:276] 0 containers: []
	W0213 19:20:39.317812   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:39.317879   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:39.336565   56138 logs.go:276] 0 containers: []
	W0213 19:20:39.336580   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:39.336644   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:39.354670   56138 logs.go:276] 0 containers: []
	W0213 19:20:39.354685   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:39.354753   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:39.372714   56138 logs.go:276] 0 containers: []
	W0213 19:20:39.372727   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:39.372796   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:39.391330   56138 logs.go:276] 0 containers: []
	W0213 19:20:39.391343   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:39.391425   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:39.408609   56138 logs.go:276] 0 containers: []
	W0213 19:20:39.408622   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:39.408629   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:39.408635   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:39.474084   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:39.474099   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:39.522800   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:39.522818   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:39.545194   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:39.545213   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:39.610953   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:39.610964   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:39.610978   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:42.132692   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:42.149381   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:42.169924   56138 logs.go:276] 0 containers: []
	W0213 19:20:42.169937   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:42.170004   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:42.193968   56138 logs.go:276] 0 containers: []
	W0213 19:20:42.193982   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:42.194061   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:42.214742   56138 logs.go:276] 0 containers: []
	W0213 19:20:42.214757   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:42.214827   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:42.233872   56138 logs.go:276] 0 containers: []
	W0213 19:20:42.233888   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:42.233960   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:42.252671   56138 logs.go:276] 0 containers: []
	W0213 19:20:42.252684   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:42.252761   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:42.271379   56138 logs.go:276] 0 containers: []
	W0213 19:20:42.271393   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:42.271477   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:42.289992   56138 logs.go:276] 0 containers: []
	W0213 19:20:42.290008   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:42.290075   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:42.309481   56138 logs.go:276] 0 containers: []
	W0213 19:20:42.309506   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:42.309522   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:42.309534   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:42.356684   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:42.356707   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:42.379621   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:42.379648   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:42.449261   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:42.449277   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:42.449287   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:42.471925   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:42.471942   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:45.047549   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:45.067720   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:45.094174   56138 logs.go:276] 0 containers: []
	W0213 19:20:45.094202   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:45.094288   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:45.119162   56138 logs.go:276] 0 containers: []
	W0213 19:20:45.119176   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:45.119234   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:45.139482   56138 logs.go:276] 0 containers: []
	W0213 19:20:45.139502   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:45.139594   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:45.157532   56138 logs.go:276] 0 containers: []
	W0213 19:20:45.157547   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:45.157612   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:45.175020   56138 logs.go:276] 0 containers: []
	W0213 19:20:45.175034   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:45.175103   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:45.195641   56138 logs.go:276] 0 containers: []
	W0213 19:20:45.195656   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:45.195735   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:45.214231   56138 logs.go:276] 0 containers: []
	W0213 19:20:45.214245   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:45.214310   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:45.233459   56138 logs.go:276] 0 containers: []
	W0213 19:20:45.233476   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:45.233489   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:45.233500   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:45.278577   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:45.278596   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:45.304169   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:45.304193   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:45.383598   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:45.383619   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:45.383639   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:45.408481   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:45.408497   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:47.979058   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:47.998048   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:48.018065   56138 logs.go:276] 0 containers: []
	W0213 19:20:48.018082   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:48.018147   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:48.037968   56138 logs.go:276] 0 containers: []
	W0213 19:20:48.037984   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:48.038050   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:48.055786   56138 logs.go:276] 0 containers: []
	W0213 19:20:48.055799   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:48.055864   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:48.074041   56138 logs.go:276] 0 containers: []
	W0213 19:20:48.074054   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:48.074111   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:48.092179   56138 logs.go:276] 0 containers: []
	W0213 19:20:48.092192   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:48.092257   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:48.110463   56138 logs.go:276] 0 containers: []
	W0213 19:20:48.110477   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:48.110545   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:48.129172   56138 logs.go:276] 0 containers: []
	W0213 19:20:48.129219   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:48.129282   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:48.147528   56138 logs.go:276] 0 containers: []
	W0213 19:20:48.147541   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:48.147549   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:48.147556   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:48.194343   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:48.194362   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:48.216687   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:48.216704   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:48.306508   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:48.306522   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:48.306533   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:48.373193   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:48.373207   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:50.942208   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:50.960556   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:50.979525   56138 logs.go:276] 0 containers: []
	W0213 19:20:50.979539   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:50.979607   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:50.998786   56138 logs.go:276] 0 containers: []
	W0213 19:20:50.998804   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:50.998887   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:51.016703   56138 logs.go:276] 0 containers: []
	W0213 19:20:51.016719   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:51.016788   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:51.036699   56138 logs.go:276] 0 containers: []
	W0213 19:20:51.036717   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:51.036788   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:51.057430   56138 logs.go:276] 0 containers: []
	W0213 19:20:51.057453   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:51.057521   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:51.076621   56138 logs.go:276] 0 containers: []
	W0213 19:20:51.076637   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:51.076709   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:51.096615   56138 logs.go:276] 0 containers: []
	W0213 19:20:51.096630   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:51.096701   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:51.115817   56138 logs.go:276] 0 containers: []
	W0213 19:20:51.115831   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:51.115838   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:51.115846   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:51.162131   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:51.162152   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:51.185470   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:51.185488   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:51.258091   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:51.258119   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:51.258128   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:51.281307   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:51.281325   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:53.848322   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:53.869927   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:53.890312   56138 logs.go:276] 0 containers: []
	W0213 19:20:53.890326   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:53.890406   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:53.913537   56138 logs.go:276] 0 containers: []
	W0213 19:20:53.913556   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:53.913653   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:53.936053   56138 logs.go:276] 0 containers: []
	W0213 19:20:53.936071   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:53.936158   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:53.955951   56138 logs.go:276] 0 containers: []
	W0213 19:20:53.955966   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:53.956036   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:53.976330   56138 logs.go:276] 0 containers: []
	W0213 19:20:53.976346   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:53.976415   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:53.999040   56138 logs.go:276] 0 containers: []
	W0213 19:20:53.999054   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:53.999107   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:54.018906   56138 logs.go:276] 0 containers: []
	W0213 19:20:54.018927   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:54.018984   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:54.037050   56138 logs.go:276] 0 containers: []
	W0213 19:20:54.037067   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:54.037075   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:54.037082   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:54.082424   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:54.082441   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:54.103110   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:54.103125   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:54.170026   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:54.170043   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:54.170051   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:54.192439   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:54.192472   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:56.762571   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:56.779025   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:56.797680   56138 logs.go:276] 0 containers: []
	W0213 19:20:56.797701   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:56.797813   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:56.816343   56138 logs.go:276] 0 containers: []
	W0213 19:20:56.816356   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:56.816438   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:56.835875   56138 logs.go:276] 0 containers: []
	W0213 19:20:56.835888   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:56.835951   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:56.855101   56138 logs.go:276] 0 containers: []
	W0213 19:20:56.855116   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:56.855207   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:56.874654   56138 logs.go:276] 0 containers: []
	W0213 19:20:56.874669   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:56.874732   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:56.893257   56138 logs.go:276] 0 containers: []
	W0213 19:20:56.893274   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:56.893345   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:56.912213   56138 logs.go:276] 0 containers: []
	W0213 19:20:56.912225   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:56.912284   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:56.930024   56138 logs.go:276] 0 containers: []
	W0213 19:20:56.930038   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:56.930045   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:56.930052   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:20:56.975643   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:20:56.975666   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:20:57.001554   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:20:57.001581   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:20:57.189843   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:20:57.189855   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:20:57.189908   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:20:57.216109   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:20:57.216138   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:20:59.796089   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:20:59.813046   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:20:59.832935   56138 logs.go:276] 0 containers: []
	W0213 19:20:59.832948   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:20:59.833005   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:20:59.852350   56138 logs.go:276] 0 containers: []
	W0213 19:20:59.852368   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:20:59.852439   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:20:59.870364   56138 logs.go:276] 0 containers: []
	W0213 19:20:59.870387   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:20:59.870463   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:20:59.888477   56138 logs.go:276] 0 containers: []
	W0213 19:20:59.888492   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:20:59.888567   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:20:59.909319   56138 logs.go:276] 0 containers: []
	W0213 19:20:59.909333   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:20:59.909397   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:20:59.928016   56138 logs.go:276] 0 containers: []
	W0213 19:20:59.928030   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:20:59.928087   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:20:59.947713   56138 logs.go:276] 0 containers: []
	W0213 19:20:59.947725   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:20:59.947787   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:20:59.966436   56138 logs.go:276] 0 containers: []
	W0213 19:20:59.966469   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:20:59.966479   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:20:59.966486   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:00.015426   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:00.015448   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:00.036508   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:00.036523   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:00.104307   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:00.104323   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:00.104331   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:00.128824   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:00.128841   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:02.697764   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:02.715385   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:02.737505   56138 logs.go:276] 0 containers: []
	W0213 19:21:02.737523   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:02.737596   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:02.761275   56138 logs.go:276] 0 containers: []
	W0213 19:21:02.761290   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:02.761355   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:02.785087   56138 logs.go:276] 0 containers: []
	W0213 19:21:02.785105   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:02.785194   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:02.805802   56138 logs.go:276] 0 containers: []
	W0213 19:21:02.805818   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:02.805891   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:02.828333   56138 logs.go:276] 0 containers: []
	W0213 19:21:02.828352   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:02.828449   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:02.849308   56138 logs.go:276] 0 containers: []
	W0213 19:21:02.849323   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:02.849401   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:02.868644   56138 logs.go:276] 0 containers: []
	W0213 19:21:02.868658   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:02.868723   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:02.886431   56138 logs.go:276] 0 containers: []
	W0213 19:21:02.886446   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:02.886453   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:02.886460   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:02.933337   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:02.933361   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:02.960082   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:02.960120   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:03.033016   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:03.033028   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:03.033035   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:03.055214   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:03.055229   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:05.624614   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:05.644814   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:05.665038   56138 logs.go:276] 0 containers: []
	W0213 19:21:05.665052   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:05.665118   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:05.683498   56138 logs.go:276] 0 containers: []
	W0213 19:21:05.683525   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:05.683627   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:05.705679   56138 logs.go:276] 0 containers: []
	W0213 19:21:05.705695   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:05.705781   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:05.732944   56138 logs.go:276] 0 containers: []
	W0213 19:21:05.732962   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:05.733043   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:05.755640   56138 logs.go:276] 0 containers: []
	W0213 19:21:05.755654   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:05.755724   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:05.778278   56138 logs.go:276] 0 containers: []
	W0213 19:21:05.778296   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:05.778373   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:05.802866   56138 logs.go:276] 0 containers: []
	W0213 19:21:05.802881   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:05.802957   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:05.862991   56138 logs.go:276] 0 containers: []
	W0213 19:21:05.863026   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:05.863034   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:05.863041   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:05.914367   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:05.914390   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:05.939896   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:05.939912   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:06.009740   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:06.009755   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:06.009765   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:06.041052   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:06.041067   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:08.617031   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:08.635044   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:08.653314   56138 logs.go:276] 0 containers: []
	W0213 19:21:08.653328   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:08.653425   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:08.673003   56138 logs.go:276] 0 containers: []
	W0213 19:21:08.673017   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:08.673077   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:08.691641   56138 logs.go:276] 0 containers: []
	W0213 19:21:08.691654   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:08.691719   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:08.709399   56138 logs.go:276] 0 containers: []
	W0213 19:21:08.709413   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:08.709493   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:08.728779   56138 logs.go:276] 0 containers: []
	W0213 19:21:08.728793   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:08.728864   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:08.748094   56138 logs.go:276] 0 containers: []
	W0213 19:21:08.748110   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:08.748178   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:08.766913   56138 logs.go:276] 0 containers: []
	W0213 19:21:08.766926   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:08.766989   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:08.785730   56138 logs.go:276] 0 containers: []
	W0213 19:21:08.785745   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:08.785752   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:08.785760   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:08.805147   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:08.805163   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:08.886070   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:08.886082   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:08.886090   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:08.907478   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:08.907494   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:08.972976   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:08.972992   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:11.519035   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:11.570252   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:11.590724   56138 logs.go:276] 0 containers: []
	W0213 19:21:11.590737   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:11.590805   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:11.609609   56138 logs.go:276] 0 containers: []
	W0213 19:21:11.609622   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:11.609688   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:11.629636   56138 logs.go:276] 0 containers: []
	W0213 19:21:11.629650   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:11.629719   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:11.647806   56138 logs.go:276] 0 containers: []
	W0213 19:21:11.647822   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:11.647892   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:11.671811   56138 logs.go:276] 0 containers: []
	W0213 19:21:11.671846   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:11.672006   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:11.691571   56138 logs.go:276] 0 containers: []
	W0213 19:21:11.691584   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:11.691654   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:11.709082   56138 logs.go:276] 0 containers: []
	W0213 19:21:11.715138   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:11.715209   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:11.733626   56138 logs.go:276] 0 containers: []
	W0213 19:21:11.733640   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:11.733648   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:11.733655   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:11.777721   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:11.777736   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:11.798159   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:11.798176   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:11.871795   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:11.871807   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:11.871824   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:11.897252   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:11.897269   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:14.465160   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:14.481945   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:14.500724   56138 logs.go:276] 0 containers: []
	W0213 19:21:14.500737   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:14.500799   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:14.519562   56138 logs.go:276] 0 containers: []
	W0213 19:21:14.519582   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:14.519665   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:14.536517   56138 logs.go:276] 0 containers: []
	W0213 19:21:14.536534   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:14.536634   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:14.556412   56138 logs.go:276] 0 containers: []
	W0213 19:21:14.556425   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:14.556488   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:14.575517   56138 logs.go:276] 0 containers: []
	W0213 19:21:14.575535   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:14.575623   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:14.593904   56138 logs.go:276] 0 containers: []
	W0213 19:21:14.593917   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:14.593986   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:14.612133   56138 logs.go:276] 0 containers: []
	W0213 19:21:14.612146   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:14.612213   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:14.629585   56138 logs.go:276] 0 containers: []
	W0213 19:21:14.629601   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:14.629608   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:14.629616   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:14.686690   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:14.686702   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:14.686712   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:14.708095   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:14.708109   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:14.777047   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:14.777065   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:14.825378   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:14.825405   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:17.346564   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:17.363163   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:17.381924   56138 logs.go:276] 0 containers: []
	W0213 19:21:17.381937   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:17.382004   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:17.401894   56138 logs.go:276] 0 containers: []
	W0213 19:21:17.401911   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:17.401989   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:17.422713   56138 logs.go:276] 0 containers: []
	W0213 19:21:17.422737   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:17.422868   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:17.441747   56138 logs.go:276] 0 containers: []
	W0213 19:21:17.441760   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:17.441829   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:17.459033   56138 logs.go:276] 0 containers: []
	W0213 19:21:17.459047   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:17.459113   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:17.476705   56138 logs.go:276] 0 containers: []
	W0213 19:21:17.476719   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:17.476783   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:17.496886   56138 logs.go:276] 0 containers: []
	W0213 19:21:17.496900   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:17.496964   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:17.514553   56138 logs.go:276] 0 containers: []
	W0213 19:21:17.514567   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:17.514574   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:17.514582   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:17.556717   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:17.556734   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:17.577359   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:17.577377   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:17.651384   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:17.651398   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:17.651408   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:17.672455   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:17.672468   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:20.237527   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:20.254825   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:20.275950   56138 logs.go:276] 0 containers: []
	W0213 19:21:20.275965   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:20.276042   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:20.297748   56138 logs.go:276] 0 containers: []
	W0213 19:21:20.297763   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:20.297831   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:20.315909   56138 logs.go:276] 0 containers: []
	W0213 19:21:20.315924   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:20.316005   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:20.335983   56138 logs.go:276] 0 containers: []
	W0213 19:21:20.335997   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:20.336064   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:20.353920   56138 logs.go:276] 0 containers: []
	W0213 19:21:20.353935   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:20.354008   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:20.373790   56138 logs.go:276] 0 containers: []
	W0213 19:21:20.373807   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:20.373878   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:20.394763   56138 logs.go:276] 0 containers: []
	W0213 19:21:20.394777   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:20.394856   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:20.416572   56138 logs.go:276] 0 containers: []
	W0213 19:21:20.416593   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:20.416605   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:20.416618   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:20.441113   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:20.441129   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:20.503755   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:20.503770   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:20.546350   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:20.546367   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:20.567657   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:20.567673   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:20.626847   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:23.128114   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:23.146271   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:23.166041   56138 logs.go:276] 0 containers: []
	W0213 19:21:23.166055   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:23.166121   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:23.192002   56138 logs.go:276] 0 containers: []
	W0213 19:21:23.192018   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:23.192110   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:23.221286   56138 logs.go:276] 0 containers: []
	W0213 19:21:23.221308   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:23.221408   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:23.248775   56138 logs.go:276] 0 containers: []
	W0213 19:21:23.248789   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:23.248858   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:23.268393   56138 logs.go:276] 0 containers: []
	W0213 19:21:23.268416   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:23.268488   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:23.292898   56138 logs.go:276] 0 containers: []
	W0213 19:21:23.292925   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:23.293032   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:23.317175   56138 logs.go:276] 0 containers: []
	W0213 19:21:23.317197   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:23.317318   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:23.339062   56138 logs.go:276] 0 containers: []
	W0213 19:21:23.339091   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:23.339098   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:23.339120   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:23.419334   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:23.419350   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:23.419359   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:23.442806   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:23.442821   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:23.528279   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:23.528295   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:23.576385   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:23.576403   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:26.107322   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:26.126279   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:26.146986   56138 logs.go:276] 0 containers: []
	W0213 19:21:26.147001   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:26.147076   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:26.171081   56138 logs.go:276] 0 containers: []
	W0213 19:21:26.171095   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:26.171166   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:26.189570   56138 logs.go:276] 0 containers: []
	W0213 19:21:26.189588   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:26.189661   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:26.213376   56138 logs.go:276] 0 containers: []
	W0213 19:21:26.213394   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:26.213468   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:26.234851   56138 logs.go:276] 0 containers: []
	W0213 19:21:26.234867   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:26.234936   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:26.258932   56138 logs.go:276] 0 containers: []
	W0213 19:21:26.258947   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:26.259023   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:26.283686   56138 logs.go:276] 0 containers: []
	W0213 19:21:26.283702   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:26.283773   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:26.308508   56138 logs.go:276] 0 containers: []
	W0213 19:21:26.308540   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:26.308554   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:26.308568   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:26.398355   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:26.398369   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:26.398379   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:26.422321   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:26.422336   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:26.487285   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:26.487301   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:26.530912   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:26.530928   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:29.050816   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:29.069945   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:29.088201   56138 logs.go:276] 0 containers: []
	W0213 19:21:29.088216   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:29.088281   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:29.107166   56138 logs.go:276] 0 containers: []
	W0213 19:21:29.107181   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:29.107253   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:29.125576   56138 logs.go:276] 0 containers: []
	W0213 19:21:29.125590   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:29.125655   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:29.144300   56138 logs.go:276] 0 containers: []
	W0213 19:21:29.144315   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:29.144404   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:29.165031   56138 logs.go:276] 0 containers: []
	W0213 19:21:29.165044   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:29.165107   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:29.184310   56138 logs.go:276] 0 containers: []
	W0213 19:21:29.184333   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:29.184416   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:29.203535   56138 logs.go:276] 0 containers: []
	W0213 19:21:29.203549   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:29.203625   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:29.222855   56138 logs.go:276] 0 containers: []
	W0213 19:21:29.222868   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:29.222876   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:29.222882   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:29.242072   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:29.242088   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:29.307147   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:29.307161   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:29.307169   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:29.328841   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:29.328866   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:29.394882   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:29.394898   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:31.937583   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:31.954984   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:31.973044   56138 logs.go:276] 0 containers: []
	W0213 19:21:31.973058   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:31.973123   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:31.992627   56138 logs.go:276] 0 containers: []
	W0213 19:21:31.992642   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:31.992707   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:32.012356   56138 logs.go:276] 0 containers: []
	W0213 19:21:32.012372   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:32.012448   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:32.032491   56138 logs.go:276] 0 containers: []
	W0213 19:21:32.032506   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:32.032569   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:32.050479   56138 logs.go:276] 0 containers: []
	W0213 19:21:32.050501   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:32.050577   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:32.068741   56138 logs.go:276] 0 containers: []
	W0213 19:21:32.068755   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:32.068833   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:32.086665   56138 logs.go:276] 0 containers: []
	W0213 19:21:32.086680   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:32.086753   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:32.105966   56138 logs.go:276] 0 containers: []
	W0213 19:21:32.105978   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:32.105985   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:32.105996   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:32.149837   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:32.149853   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:32.169880   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:32.169896   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:32.234122   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:32.234135   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:32.234147   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:32.255235   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:32.255251   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:34.821472   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:34.838916   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:34.857558   56138 logs.go:276] 0 containers: []
	W0213 19:21:34.857572   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:34.857647   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:34.876753   56138 logs.go:276] 0 containers: []
	W0213 19:21:34.876766   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:34.876832   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:34.895126   56138 logs.go:276] 0 containers: []
	W0213 19:21:34.895141   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:34.895214   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:34.913811   56138 logs.go:276] 0 containers: []
	W0213 19:21:34.913824   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:34.913891   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:34.931814   56138 logs.go:276] 0 containers: []
	W0213 19:21:34.931828   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:34.931896   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:34.950948   56138 logs.go:276] 0 containers: []
	W0213 19:21:34.950992   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:34.951108   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:34.971815   56138 logs.go:276] 0 containers: []
	W0213 19:21:34.971830   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:34.971941   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:34.993929   56138 logs.go:276] 0 containers: []
	W0213 19:21:34.993962   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:34.993973   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:34.993984   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:35.044996   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:35.045017   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:35.079945   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:35.079961   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:35.155967   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:35.155978   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:35.155986   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:35.177957   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:35.177973   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:37.744338   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:37.761262   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:37.779077   56138 logs.go:276] 0 containers: []
	W0213 19:21:37.779090   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:37.779179   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:37.797876   56138 logs.go:276] 0 containers: []
	W0213 19:21:37.797889   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:37.797954   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:37.816674   56138 logs.go:276] 0 containers: []
	W0213 19:21:37.816687   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:37.816757   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:37.834848   56138 logs.go:276] 0 containers: []
	W0213 19:21:37.834862   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:37.834927   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:37.854276   56138 logs.go:276] 0 containers: []
	W0213 19:21:37.854290   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:37.854357   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:37.873459   56138 logs.go:276] 0 containers: []
	W0213 19:21:37.873472   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:37.873540   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:37.890345   56138 logs.go:276] 0 containers: []
	W0213 19:21:37.890358   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:37.890435   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:37.909014   56138 logs.go:276] 0 containers: []
	W0213 19:21:37.909029   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:37.909036   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:37.909043   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:37.955001   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:37.955019   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:37.977154   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:37.977172   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:38.068735   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:38.068747   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:38.068762   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:38.091707   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:38.091723   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:40.656529   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:40.672911   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:40.690889   56138 logs.go:276] 0 containers: []
	W0213 19:21:40.690901   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:40.690965   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:40.709598   56138 logs.go:276] 0 containers: []
	W0213 19:21:40.709612   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:40.709685   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:40.726785   56138 logs.go:276] 0 containers: []
	W0213 19:21:40.726799   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:40.726865   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:40.744656   56138 logs.go:276] 0 containers: []
	W0213 19:21:40.744670   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:40.744740   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:40.761968   56138 logs.go:276] 0 containers: []
	W0213 19:21:40.761982   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:40.762054   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:40.781452   56138 logs.go:276] 0 containers: []
	W0213 19:21:40.781465   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:40.781530   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:40.801435   56138 logs.go:276] 0 containers: []
	W0213 19:21:40.801452   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:40.801519   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:40.819424   56138 logs.go:276] 0 containers: []
	W0213 19:21:40.819446   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:40.819453   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:40.819461   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:40.882926   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:40.882939   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:40.928950   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:40.928965   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:40.950671   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:40.950688   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:41.065426   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:41.065439   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:41.065447   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:43.589620   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:43.607046   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:43.625408   56138 logs.go:276] 0 containers: []
	W0213 19:21:43.625421   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:43.625483   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:43.646254   56138 logs.go:276] 0 containers: []
	W0213 19:21:43.646267   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:43.646335   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:43.665460   56138 logs.go:276] 0 containers: []
	W0213 19:21:43.665473   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:43.665555   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:43.684745   56138 logs.go:276] 0 containers: []
	W0213 19:21:43.684760   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:43.684828   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:43.703201   56138 logs.go:276] 0 containers: []
	W0213 19:21:43.703216   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:43.703297   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:43.724556   56138 logs.go:276] 0 containers: []
	W0213 19:21:43.724570   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:43.724634   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:43.745142   56138 logs.go:276] 0 containers: []
	W0213 19:21:43.745169   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:43.745246   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:43.770219   56138 logs.go:276] 0 containers: []
	W0213 19:21:43.770233   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:43.770240   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:43.770247   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:43.817147   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:43.817166   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:43.839145   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:43.839160   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:43.917269   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:43.917292   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:43.917300   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:43.940716   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:43.940733   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:46.505614   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:46.523247   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:46.541817   56138 logs.go:276] 0 containers: []
	W0213 19:21:46.541832   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:46.541904   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:46.562618   56138 logs.go:276] 0 containers: []
	W0213 19:21:46.562632   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:46.562700   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:46.582039   56138 logs.go:276] 0 containers: []
	W0213 19:21:46.582054   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:46.582121   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:46.600999   56138 logs.go:276] 0 containers: []
	W0213 19:21:46.601015   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:46.601085   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:46.620458   56138 logs.go:276] 0 containers: []
	W0213 19:21:46.620472   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:46.620550   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:46.639845   56138 logs.go:276] 0 containers: []
	W0213 19:21:46.639859   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:46.639934   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:46.657499   56138 logs.go:276] 0 containers: []
	W0213 19:21:46.657512   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:46.657581   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:46.674659   56138 logs.go:276] 0 containers: []
	W0213 19:21:46.674673   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:46.674680   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:46.674686   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:46.716460   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:46.716991   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:46.737155   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:46.737172   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:46.800926   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:46.800945   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:46.800953   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:46.822101   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:46.822115   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:49.385640   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:49.403344   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:49.423006   56138 logs.go:276] 0 containers: []
	W0213 19:21:49.423021   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:49.423086   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:49.442941   56138 logs.go:276] 0 containers: []
	W0213 19:21:49.442958   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:49.443036   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:49.462114   56138 logs.go:276] 0 containers: []
	W0213 19:21:49.462127   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:49.462194   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:49.481779   56138 logs.go:276] 0 containers: []
	W0213 19:21:49.481792   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:49.481859   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:49.501656   56138 logs.go:276] 0 containers: []
	W0213 19:21:49.501669   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:49.501740   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:49.520257   56138 logs.go:276] 0 containers: []
	W0213 19:21:49.520271   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:49.520336   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:49.540544   56138 logs.go:276] 0 containers: []
	W0213 19:21:49.540558   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:49.540624   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:49.558107   56138 logs.go:276] 0 containers: []
	W0213 19:21:49.558121   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:49.558128   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:49.558146   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:49.604809   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:49.604827   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:49.625416   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:49.625437   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:49.689561   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:49.689572   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:49.689580   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:49.711301   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:49.711315   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:52.288325   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:52.305156   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:52.324439   56138 logs.go:276] 0 containers: []
	W0213 19:21:52.324453   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:52.324519   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:52.343549   56138 logs.go:276] 0 containers: []
	W0213 19:21:52.343564   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:52.343628   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:52.362654   56138 logs.go:276] 0 containers: []
	W0213 19:21:52.362667   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:52.362732   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:52.380082   56138 logs.go:276] 0 containers: []
	W0213 19:21:52.380097   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:52.380163   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:52.399892   56138 logs.go:276] 0 containers: []
	W0213 19:21:52.399908   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:52.399976   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:52.418789   56138 logs.go:276] 0 containers: []
	W0213 19:21:52.418803   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:52.418869   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:52.437690   56138 logs.go:276] 0 containers: []
	W0213 19:21:52.437704   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:52.437769   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:52.457171   56138 logs.go:276] 0 containers: []
	W0213 19:21:52.457200   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:52.457207   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:52.457215   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:52.505560   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:52.505579   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:52.528770   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:52.528789   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:52.616826   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:52.616839   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:52.616847   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:52.638863   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:52.638914   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:55.205902   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:55.224993   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:55.245407   56138 logs.go:276] 0 containers: []
	W0213 19:21:55.245420   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:55.245485   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:55.264018   56138 logs.go:276] 0 containers: []
	W0213 19:21:55.264033   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:55.264099   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:55.284392   56138 logs.go:276] 0 containers: []
	W0213 19:21:55.284406   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:55.284467   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:55.304536   56138 logs.go:276] 0 containers: []
	W0213 19:21:55.304550   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:55.304613   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:55.323573   56138 logs.go:276] 0 containers: []
	W0213 19:21:55.323588   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:55.323655   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:55.343567   56138 logs.go:276] 0 containers: []
	W0213 19:21:55.343584   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:55.343649   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:55.362163   56138 logs.go:276] 0 containers: []
	W0213 19:21:55.362176   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:55.362241   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:55.381587   56138 logs.go:276] 0 containers: []
	W0213 19:21:55.381601   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:55.381608   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:55.381615   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:55.427340   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:55.427355   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:55.449482   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:55.449503   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:55.571482   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:55.571493   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:55.571501   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:55.594306   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:55.594321   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:21:58.163477   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:21:58.180590   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:21:58.199730   56138 logs.go:276] 0 containers: []
	W0213 19:21:58.199744   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:21:58.199809   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:21:58.220356   56138 logs.go:276] 0 containers: []
	W0213 19:21:58.220369   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:21:58.220431   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:21:58.239633   56138 logs.go:276] 0 containers: []
	W0213 19:21:58.239647   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:21:58.239712   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:21:58.258624   56138 logs.go:276] 0 containers: []
	W0213 19:21:58.258640   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:21:58.258708   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:21:58.277863   56138 logs.go:276] 0 containers: []
	W0213 19:21:58.277877   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:21:58.277946   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:21:58.297042   56138 logs.go:276] 0 containers: []
	W0213 19:21:58.297055   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:21:58.297121   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:21:58.315187   56138 logs.go:276] 0 containers: []
	W0213 19:21:58.315200   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:21:58.315276   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:21:58.335027   56138 logs.go:276] 0 containers: []
	W0213 19:21:58.335042   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:21:58.335050   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:21:58.335057   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:21:58.378373   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:21:58.378389   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:21:58.399273   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:21:58.399330   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:21:58.469323   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:21:58.469335   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:21:58.469344   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:21:58.491671   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:21:58.491689   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:01.057355   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:01.075065   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:01.093595   56138 logs.go:276] 0 containers: []
	W0213 19:22:01.093609   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:01.093677   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:01.112824   56138 logs.go:276] 0 containers: []
	W0213 19:22:01.112838   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:01.112902   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:01.131677   56138 logs.go:276] 0 containers: []
	W0213 19:22:01.131691   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:01.131754   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:01.152267   56138 logs.go:276] 0 containers: []
	W0213 19:22:01.152298   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:01.152363   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:01.171908   56138 logs.go:276] 0 containers: []
	W0213 19:22:01.171922   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:01.171986   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:01.191785   56138 logs.go:276] 0 containers: []
	W0213 19:22:01.191799   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:01.191896   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:01.212117   56138 logs.go:276] 0 containers: []
	W0213 19:22:01.212131   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:01.212249   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:01.231915   56138 logs.go:276] 0 containers: []
	W0213 19:22:01.231929   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:01.231936   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:01.231943   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:01.274416   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:01.274432   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:01.294185   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:01.294200   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:01.371925   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:01.371938   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:01.371968   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:01.394247   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:01.394261   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:03.960518   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:03.977531   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:03.999992   56138 logs.go:276] 0 containers: []
	W0213 19:22:04.000011   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:04.000127   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:04.020787   56138 logs.go:276] 0 containers: []
	W0213 19:22:04.020803   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:04.020871   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:04.042225   56138 logs.go:276] 0 containers: []
	W0213 19:22:04.042238   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:04.042325   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:04.070659   56138 logs.go:276] 0 containers: []
	W0213 19:22:04.070679   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:04.070747   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:04.089486   56138 logs.go:276] 0 containers: []
	W0213 19:22:04.089500   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:04.089564   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:04.109414   56138 logs.go:276] 0 containers: []
	W0213 19:22:04.109442   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:04.109509   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:04.129750   56138 logs.go:276] 0 containers: []
	W0213 19:22:04.129765   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:04.129831   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:04.148604   56138 logs.go:276] 0 containers: []
	W0213 19:22:04.148618   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:04.148625   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:04.148632   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:04.214345   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:04.214356   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:04.214364   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:04.235492   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:04.235510   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:04.302388   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:04.302407   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:04.348554   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:04.348573   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:06.870635   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:06.888012   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:06.906716   56138 logs.go:276] 0 containers: []
	W0213 19:22:06.906729   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:06.906794   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:06.925674   56138 logs.go:276] 0 containers: []
	W0213 19:22:06.925687   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:06.925753   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:06.944107   56138 logs.go:276] 0 containers: []
	W0213 19:22:06.944120   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:06.944183   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:06.963531   56138 logs.go:276] 0 containers: []
	W0213 19:22:06.963547   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:06.963614   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:06.985426   56138 logs.go:276] 0 containers: []
	W0213 19:22:06.985442   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:06.985510   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:07.006831   56138 logs.go:276] 0 containers: []
	W0213 19:22:07.006844   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:07.006909   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:07.028095   56138 logs.go:276] 0 containers: []
	W0213 19:22:07.028109   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:07.028176   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:07.070811   56138 logs.go:276] 0 containers: []
	W0213 19:22:07.070826   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:07.070834   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:07.070841   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:07.137993   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:07.138007   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:07.182186   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:07.182201   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:07.203056   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:07.203072   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:07.269525   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:07.269536   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:07.269546   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:09.791507   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:09.807971   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:09.828727   56138 logs.go:276] 0 containers: []
	W0213 19:22:09.828743   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:09.828815   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:09.854641   56138 logs.go:276] 0 containers: []
	W0213 19:22:09.854665   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:09.854771   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:09.880670   56138 logs.go:276] 0 containers: []
	W0213 19:22:09.880684   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:09.880751   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:09.900342   56138 logs.go:276] 0 containers: []
	W0213 19:22:09.900358   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:09.900427   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:09.920414   56138 logs.go:276] 0 containers: []
	W0213 19:22:09.920428   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:09.920492   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:09.940808   56138 logs.go:276] 0 containers: []
	W0213 19:22:09.940821   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:09.940886   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:09.960539   56138 logs.go:276] 0 containers: []
	W0213 19:22:09.960552   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:09.960619   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:09.979873   56138 logs.go:276] 0 containers: []
	W0213 19:22:09.979888   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:09.979897   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:09.979904   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:10.023496   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:10.023513   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:10.044177   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:10.044235   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:10.127668   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:10.127679   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:10.127687   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:10.150280   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:10.150296   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:12.718908   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:12.736904   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:12.756901   56138 logs.go:276] 0 containers: []
	W0213 19:22:12.756915   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:12.756976   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:12.776323   56138 logs.go:276] 0 containers: []
	W0213 19:22:12.776338   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:12.776404   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:12.794500   56138 logs.go:276] 0 containers: []
	W0213 19:22:12.794515   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:12.794581   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:12.815077   56138 logs.go:276] 0 containers: []
	W0213 19:22:12.815092   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:12.815162   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:12.834601   56138 logs.go:276] 0 containers: []
	W0213 19:22:12.834616   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:12.834686   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:12.852495   56138 logs.go:276] 0 containers: []
	W0213 19:22:12.852514   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:12.852609   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:12.871401   56138 logs.go:276] 0 containers: []
	W0213 19:22:12.871415   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:12.871478   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:12.890332   56138 logs.go:276] 0 containers: []
	W0213 19:22:12.890347   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:12.890353   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:12.890361   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:12.933907   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:12.933921   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:12.954585   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:12.954601   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:13.022377   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:13.022392   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:13.022401   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:13.043981   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:13.043996   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:15.610828   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:15.628358   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:15.647290   56138 logs.go:276] 0 containers: []
	W0213 19:22:15.647304   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:15.647374   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:15.666558   56138 logs.go:276] 0 containers: []
	W0213 19:22:15.666573   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:15.666641   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:15.684455   56138 logs.go:276] 0 containers: []
	W0213 19:22:15.684471   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:15.684539   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:15.706335   56138 logs.go:276] 0 containers: []
	W0213 19:22:15.706351   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:15.706416   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:15.725756   56138 logs.go:276] 0 containers: []
	W0213 19:22:15.725769   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:15.725854   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:15.746329   56138 logs.go:276] 0 containers: []
	W0213 19:22:15.746342   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:15.746404   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:15.765661   56138 logs.go:276] 0 containers: []
	W0213 19:22:15.765674   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:15.765740   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:15.784532   56138 logs.go:276] 0 containers: []
	W0213 19:22:15.784550   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:15.784560   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:15.784568   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:15.829119   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:15.829135   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:15.849654   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:15.849698   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:15.916622   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:15.916647   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:15.916669   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:15.938334   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:15.938349   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:18.508566   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:18.535546   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:18.574570   56138 logs.go:276] 0 containers: []
	W0213 19:22:18.574586   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:18.574665   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:18.596169   56138 logs.go:276] 0 containers: []
	W0213 19:22:18.596182   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:18.596258   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:18.618686   56138 logs.go:276] 0 containers: []
	W0213 19:22:18.618702   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:18.618770   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:18.639860   56138 logs.go:276] 0 containers: []
	W0213 19:22:18.639873   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:18.639937   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:18.658366   56138 logs.go:276] 0 containers: []
	W0213 19:22:18.658383   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:18.658473   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:18.677349   56138 logs.go:276] 0 containers: []
	W0213 19:22:18.677362   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:18.677426   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:18.697991   56138 logs.go:276] 0 containers: []
	W0213 19:22:18.698005   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:18.698080   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:18.718848   56138 logs.go:276] 0 containers: []
	W0213 19:22:18.718862   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:18.718869   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:18.718878   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:18.787121   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:18.787135   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:18.829872   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:18.829889   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:18.850941   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:18.850958   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:18.936065   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:18.936084   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:18.936093   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:21.470634   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:21.492759   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:21.512204   56138 logs.go:276] 0 containers: []
	W0213 19:22:21.512221   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:21.512303   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:21.532844   56138 logs.go:276] 0 containers: []
	W0213 19:22:21.532859   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:21.532907   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:21.551634   56138 logs.go:276] 0 containers: []
	W0213 19:22:21.551650   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:21.551718   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:21.570874   56138 logs.go:276] 0 containers: []
	W0213 19:22:21.570888   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:21.570949   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:21.591733   56138 logs.go:276] 0 containers: []
	W0213 19:22:21.591748   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:21.591821   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:21.611461   56138 logs.go:276] 0 containers: []
	W0213 19:22:21.611475   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:21.611544   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:21.629286   56138 logs.go:276] 0 containers: []
	W0213 19:22:21.629298   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:21.629360   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:21.647437   56138 logs.go:276] 0 containers: []
	W0213 19:22:21.647453   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:21.647460   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:21.647467   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:21.695031   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:21.716988   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:21.738501   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:21.738519   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:21.806916   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:21.806929   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:21.806939   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:21.864705   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:21.864723   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:24.428183   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:24.445015   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:24.461465   56138 logs.go:276] 0 containers: []
	W0213 19:22:24.461479   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:24.461549   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:24.479186   56138 logs.go:276] 0 containers: []
	W0213 19:22:24.479198   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:24.479263   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:24.497725   56138 logs.go:276] 0 containers: []
	W0213 19:22:24.497741   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:24.497809   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:24.514830   56138 logs.go:276] 0 containers: []
	W0213 19:22:24.514844   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:24.514916   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:24.531920   56138 logs.go:276] 0 containers: []
	W0213 19:22:24.531933   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:24.532000   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:24.549987   56138 logs.go:276] 0 containers: []
	W0213 19:22:24.550003   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:24.550068   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:24.566830   56138 logs.go:276] 0 containers: []
	W0213 19:22:24.566844   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:24.566913   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:24.584668   56138 logs.go:276] 0 containers: []
	W0213 19:22:24.584682   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:24.584690   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:24.584697   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:24.604284   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:24.604303   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:24.665036   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:24.665047   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:24.665055   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:24.686698   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:24.686712   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:24.751041   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:24.751056   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:27.296224   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:27.314077   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:27.332308   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.332322   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:27.332386   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:27.351564   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.351577   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:27.351667   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:27.372573   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.372586   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:27.372702   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:27.392093   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.392106   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:27.392175   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:27.410503   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.410517   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:27.410585   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:27.429467   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.429481   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:27.429549   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:27.447301   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.447315   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:27.447385   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:27.465997   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.466031   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:27.466039   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:27.466046   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:27.529051   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:27.529097   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:27.529105   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:27.550697   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:27.550713   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:27.610238   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:27.610254   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:27.655380   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:27.655395   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:30.176798   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:30.200447   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:30.220092   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.220112   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:30.220202   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:30.240316   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.240337   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:30.240422   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:30.259474   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.259489   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:30.259576   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:30.278709   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.278729   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:30.278801   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:30.299457   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.299472   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:30.299541   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:30.319738   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.319751   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:30.319807   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:30.338774   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.338789   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:30.338911   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:30.357848   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.357861   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:30.357869   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:30.357877   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:30.411518   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:30.411537   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:30.437344   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:30.437375   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:30.574107   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:30.574118   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:30.574126   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:30.598003   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:30.598019   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:33.168870   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:33.185856   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:33.203524   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.203538   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:33.203607   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:33.221402   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.221416   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:33.221483   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:33.239857   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.239873   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:33.239948   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:33.260529   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.260543   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:33.260610   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:33.279506   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.279520   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:33.279591   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:33.298560   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.298576   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:33.298644   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:33.316444   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.316460   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:33.316527   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:33.334292   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.334306   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:33.334314   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:33.334322   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:33.411889   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:33.411901   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:33.411909   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:33.472890   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:33.472904   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:33.538372   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:33.538388   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:33.584119   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:33.584136   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:36.106276   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:36.122966   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:36.144273   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.144284   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:36.144344   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:36.163387   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.163421   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:36.163532   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:36.182571   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.182585   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:36.182653   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:36.200433   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.200451   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:36.200521   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:36.221155   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.221174   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:36.221257   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:36.242491   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.242504   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:36.242572   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:36.262887   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.262902   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:36.262969   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:36.281933   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.281947   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:36.281954   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:36.281961   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:36.328184   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:36.328201   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:36.348358   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:36.348416   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:36.423269   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:36.423281   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:36.423290   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:36.447544   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:36.447558   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:39.020846   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:39.038103   56138 kubeadm.go:640] restartCluster took 4m12.740512453s
	W0213 19:22:39.038147   56138 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0213 19:22:39.038171   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 19:22:39.471757   56138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 19:22:39.491688   56138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 19:22:39.509057   56138 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 19:22:39.509122   56138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:22:39.527171   56138 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 19:22:39.527216   56138 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 19:22:39.588690   56138 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 19:22:39.588742   56138 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 19:22:39.886161   56138 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 19:22:39.886266   56138 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 19:22:39.886378   56138 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 19:22:40.070636   56138 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 19:22:40.071411   56138 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 19:22:40.078764   56138 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 19:22:40.155649   56138 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 19:22:40.225956   56138 out.go:204]   - Generating certificates and keys ...
	I0213 19:22:40.226039   56138 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 19:22:40.226094   56138 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 19:22:40.226153   56138 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 19:22:40.226234   56138 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 19:22:40.226292   56138 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 19:22:40.226347   56138 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 19:22:40.226412   56138 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 19:22:40.226466   56138 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 19:22:40.226563   56138 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 19:22:40.226637   56138 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 19:22:40.226685   56138 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 19:22:40.226742   56138 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 19:22:40.282137   56138 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 19:22:40.488550   56138 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 19:22:40.665468   56138 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 19:22:40.840979   56138 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 19:22:40.842287   56138 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 19:22:40.863914   56138 out.go:204]   - Booting up control plane ...
	I0213 19:22:40.864046   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 19:22:40.864136   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 19:22:40.864259   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 19:22:40.864329   56138 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 19:22:40.864608   56138 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 19:23:20.853969   56138 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 19:23:20.854772   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:23:20.854921   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:23:25.856609   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:23:25.856765   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:23:35.858542   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:23:35.858755   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:23:55.860565   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:23:55.860787   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:24:35.862294   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:24:35.862457   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:24:35.862465   56138 kubeadm.go:322] 
	I0213 19:24:35.862493   56138 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 19:24:35.862530   56138 kubeadm.go:322] 	timed out waiting for the condition
	I0213 19:24:35.862543   56138 kubeadm.go:322] 
	I0213 19:24:35.862569   56138 kubeadm.go:322] This error is likely caused by:
	I0213 19:24:35.862596   56138 kubeadm.go:322] 	- The kubelet is not running
	I0213 19:24:35.862673   56138 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 19:24:35.862681   56138 kubeadm.go:322] 
	I0213 19:24:35.862754   56138 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 19:24:35.862777   56138 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 19:24:35.862810   56138 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 19:24:35.862821   56138 kubeadm.go:322] 
	I0213 19:24:35.862919   56138 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 19:24:35.863007   56138 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 19:24:35.863082   56138 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 19:24:35.863119   56138 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 19:24:35.863169   56138 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 19:24:35.863192   56138 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 19:24:35.866981   56138 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 19:24:35.867065   56138 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 19:24:35.867174   56138 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 19:24:35.867262   56138 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 19:24:35.867331   56138 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 19:24:35.867388   56138 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0213 19:24:35.867452   56138 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0213 19:24:35.867491   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 19:24:36.285638   56138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 19:24:36.302881   56138 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 19:24:36.302939   56138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:24:36.318363   56138 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 19:24:36.318384   56138 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 19:24:36.372580   56138 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 19:24:36.372625   56138 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 19:24:36.623082   56138 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 19:24:36.623173   56138 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 19:24:36.623249   56138 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 19:24:36.797001   56138 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 19:24:36.797799   56138 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 19:24:36.804613   56138 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 19:24:36.877590   56138 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 19:24:36.915795   56138 out.go:204]   - Generating certificates and keys ...
	I0213 19:24:36.915863   56138 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 19:24:36.915937   56138 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 19:24:36.916090   56138 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 19:24:36.916173   56138 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 19:24:36.916263   56138 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 19:24:36.916340   56138 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 19:24:36.916393   56138 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 19:24:36.916451   56138 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 19:24:36.916517   56138 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 19:24:36.916589   56138 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 19:24:36.916626   56138 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 19:24:36.916672   56138 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 19:24:37.164228   56138 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 19:24:37.229183   56138 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 19:24:37.466008   56138 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 19:24:37.660133   56138 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 19:24:37.660795   56138 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 19:24:37.683182   56138 out.go:204]   - Booting up control plane ...
	I0213 19:24:37.683293   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 19:24:37.683389   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 19:24:37.683465   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 19:24:37.683560   56138 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 19:24:37.683739   56138 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 19:25:17.671991   56138 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 19:25:17.672810   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:25:17.673046   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:25:22.675221   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:25:22.675453   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:25:32.676698   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:25:32.676938   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:25:52.679029   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:25:52.679196   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:26:32.680273   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:26:32.680430   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:26:32.680441   56138 kubeadm.go:322] 
	I0213 19:26:32.680468   56138 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 19:26:32.680499   56138 kubeadm.go:322] 	timed out waiting for the condition
	I0213 19:26:32.680503   56138 kubeadm.go:322] 
	I0213 19:26:32.680528   56138 kubeadm.go:322] This error is likely caused by:
	I0213 19:26:32.680554   56138 kubeadm.go:322] 	- The kubelet is not running
	I0213 19:26:32.680636   56138 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 19:26:32.680647   56138 kubeadm.go:322] 
	I0213 19:26:32.680750   56138 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 19:26:32.680785   56138 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 19:26:32.680810   56138 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 19:26:32.680816   56138 kubeadm.go:322] 
	I0213 19:26:32.680893   56138 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 19:26:32.680975   56138 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 19:26:32.681045   56138 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 19:26:32.681090   56138 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 19:26:32.681156   56138 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 19:26:32.681182   56138 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 19:26:32.685164   56138 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 19:26:32.685231   56138 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 19:26:32.685343   56138 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 19:26:32.685431   56138 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 19:26:32.685498   56138 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 19:26:32.685562   56138 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0213 19:26:32.685595   56138 kubeadm.go:406] StartCluster complete in 8m6.424289377s
	I0213 19:26:32.685681   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:26:32.704479   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.704493   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:26:32.704563   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:26:32.722967   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.722981   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:26:32.723049   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:26:32.742268   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.742281   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:26:32.742343   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:26:32.760735   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.760753   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:26:32.760843   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:26:32.779954   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.779967   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:26:32.780031   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:26:32.798895   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.798911   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:26:32.798982   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:26:32.817577   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.817592   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:26:32.817658   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:26:32.838146   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.838161   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:26:32.838169   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:26:32.838176   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:26:32.886666   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:26:32.886682   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:26:32.907423   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:26:32.907439   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:26:32.976242   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:26:32.976270   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:26:32.976278   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:26:32.998608   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:26:32.998623   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0213 19:26:33.063184   56138 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0213 19:26:33.063209   56138 out.go:239] * 
	* 
	W0213 19:26:33.063294   56138 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 19:26:33.063314   56138 out.go:239] * 
	* 
	W0213 19:26:33.064054   56138 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 19:26:33.126810   56138 out.go:177] 
	W0213 19:26:33.169861   56138 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 19:26:33.169912   56138 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0213 19:26:33.169934   56138 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0213 19:26:33.211474   56138 out.go:177] 

                                                
                                                
** /stderr **
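The failure above reduces to a kubelet that never answered on localhost:10248, and the kubeadm stderr flags a cgroupfs-vs-systemd cgroup-driver mismatch; the minikube output itself suggests retrying with --extra-config=kubelet.cgroup-driver=systemd. A minimal follow-up sketch along those lines, not part of the captured output: the profile name and most commands are taken from this run, while running them through "minikube ssh" with a quoted pipeline and the docker info Go template are assumptions about how one might drive them from the host.

	# Confirm which cgroup driver the Docker daemon inside the node reports
	out/minikube-darwin-amd64 ssh -p old-k8s-version-187000 "docker info --format '{{.CgroupDriver}}'"

	# Inspect the kubelet directly, as the kubeadm output recommends
	out/minikube-darwin-amd64 ssh -p old-k8s-version-187000 sudo systemctl status kubelet --no-pager
	out/minikube-darwin-amd64 ssh -p old-k8s-version-187000 "sudo journalctl -xeu kubelet | tail -n 50"

	# List any control-plane containers that crashed, then read their logs with 'docker logs CONTAINERID'
	out/minikube-darwin-amd64 ssh -p old-k8s-version-187000 "docker ps -a | grep kube | grep -v pause"

	# Retry the start with the kubelet cgroup driver pinned to systemd, as suggested above
	out/minikube-darwin-amd64 start -p old-k8s-version-187000 --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd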
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-187000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-187000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f",
	        "Created": "2024-02-14T03:12:04.577549374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 384323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:18:07.857189614Z",
	            "FinishedAt": "2024-02-14T03:18:05.063436769Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hosts",
	        "LogPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f-json.log",
	        "Name": "/old-k8s-version-187000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-187000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-187000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-187000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-187000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-187000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d045fd4df5a483f35dc86c4e54cb8d1019191d338bf04e887522b7ef448b5799",
	            "SandboxKey": "/var/run/docker/netns/d045fd4df5a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57241"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57242"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57238"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57239"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57240"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-187000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e0b9362b2efd",
	                        "old-k8s-version-187000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "4cb1b8693c9780c94ad8de0e0072aef11b304b625a6e68f12739c271830cb055",
	                    "EndpointID": "6d54b12cf11b7964d3b93a05495e26e918bbcb712685fd590903de422b0b5cf6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-187000",
	                        "e0b9362b2efd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
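The inspect dump above is the full JSON for the node container; for quick triage the same fields can be pulled with Go templates via docker's standard --format flag. A short sketch, using the container name from this run:

	# Container state and restart count only
	docker inspect --format 'status={{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-187000

	# The node's IP address on the minikube network
	docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-187000

	# Memory limit applied to the node container, in bytes
	docker inspect --format '{{.HostConfig.Memory}}' old-k8s-version-187000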
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 2 (417.618701ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
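The post-mortem block that follows was gathered with "logs -n 25"; the suggestion box earlier in the output asks for a full log bundle when filing an issue. A small usage sketch, assembled from this run's own commands and suggestions (the logs.txt filename is whatever the reporter chooses):

	# Last 25 log lines, as captured below
	out/minikube-darwin-amd64 -p old-k8s-version-187000 logs -n 25

	# Full log bundle for attaching to the GitHub issue, per the suggestion box above
	out/minikube-darwin-amd64 -p old-k8s-version-187000 logs --file=logs.txt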
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-187000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-187000 logs -n 25: (1.538872496s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-210000 sudo                                 | kubenet-210000         | jenkins | v1.32.0 | 13 Feb 24 19:12 PST | 13 Feb 24 19:12 PST |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-210000 sudo                                 | kubenet-210000         | jenkins | v1.32.0 | 13 Feb 24 19:12 PST |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-210000 sudo                                 | kubenet-210000         | jenkins | v1.32.0 | 13 Feb 24 19:12 PST | 13 Feb 24 19:12 PST |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-210000 sudo find                            | kubenet-210000         | jenkins | v1.32.0 | 13 Feb 24 19:12 PST | 13 Feb 24 19:12 PST |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-210000 sudo crio                            | kubenet-210000         | jenkins | v1.32.0 | 13 Feb 24 19:12 PST | 13 Feb 24 19:12 PST |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-210000                                      | kubenet-210000         | jenkins | v1.32.0 | 13 Feb 24 19:12 PST | 13 Feb 24 19:12 PST |
	| start   | -p no-preload-867000                                   | no-preload-867000      | jenkins | v1.32.0 | 13 Feb 24 19:12 PST | 13 Feb 24 19:14 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-867000             | no-preload-867000      | jenkins | v1.32.0 | 13 Feb 24 19:14 PST | 13 Feb 24 19:14 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-867000                                   | no-preload-867000      | jenkins | v1.32.0 | 13 Feb 24 19:14 PST | 13 Feb 24 19:14 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-867000                  | no-preload-867000      | jenkins | v1.32.0 | 13 Feb 24 19:14 PST | 13 Feb 24 19:14 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-867000                                   | no-preload-867000      | jenkins | v1.32.0 | 13 Feb 24 19:14 PST | 13 Feb 24 19:20 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-187000        | old-k8s-version-187000 | jenkins | v1.32.0 | 13 Feb 24 19:16 PST |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-187000                              | old-k8s-version-187000 | jenkins | v1.32.0 | 13 Feb 24 19:18 PST | 13 Feb 24 19:18 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-187000             | old-k8s-version-187000 | jenkins | v1.32.0 | 13 Feb 24 19:18 PST | 13 Feb 24 19:18 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-187000                              | old-k8s-version-187000 | jenkins | v1.32.0 | 13 Feb 24 19:18 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| image   | no-preload-867000 image list                           | no-preload-867000      | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:20 PST |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p no-preload-867000                                   | no-preload-867000      | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:20 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-867000                                   | no-preload-867000      | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:20 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-867000                                   | no-preload-867000      | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:20 PST |
	| delete  | -p no-preload-867000                                   | no-preload-867000      | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:20 PST |
	| start   | -p embed-certs-815000                                  | embed-certs-815000     | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:22 PST |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-815000            | embed-certs-815000     | jenkins | v1.32.0 | 13 Feb 24 19:22 PST | 13 Feb 24 19:22 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-815000                                  | embed-certs-815000     | jenkins | v1.32.0 | 13 Feb 24 19:22 PST | 13 Feb 24 19:22 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-815000                 | embed-certs-815000     | jenkins | v1.32.0 | 13 Feb 24 19:22 PST | 13 Feb 24 19:22 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-815000                                  | embed-certs-815000     | jenkins | v1.32.0 | 13 Feb 24 19:22 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 19:22:31
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 19:22:31.071038   56575 out.go:291] Setting OutFile to fd 1 ...
	I0213 19:22:31.071313   56575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:22:31.071319   56575 out.go:304] Setting ErrFile to fd 2...
	I0213 19:22:31.071323   56575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:22:31.071511   56575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 19:22:31.073188   56575 out.go:298] Setting JSON to false
	I0213 19:22:31.096858   56575 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":17810,"bootTime":1707863141,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 19:22:31.096955   56575 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 19:22:31.118577   56575 out.go:177] * [embed-certs-815000] minikube v1.32.0 on Darwin 14.3.1
	I0213 19:22:31.162565   56575 out.go:177]   - MINIKUBE_LOCATION=18165
	I0213 19:22:31.162653   56575 notify.go:220] Checking for updates...
	I0213 19:22:31.205562   56575 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:22:31.227525   56575 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 19:22:31.248391   56575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 19:22:31.269500   56575 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	I0213 19:22:31.291523   56575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 19:22:31.313159   56575 config.go:182] Loaded profile config "embed-certs-815000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 19:22:31.313937   56575 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 19:22:31.370565   56575 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 19:22:31.370725   56575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:22:31.474495   56575 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 03:22:31.463664208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:22:31.517826   56575 out.go:177] * Using the docker driver based on existing profile
	I0213 19:22:31.539460   56575 start.go:298] selected driver: docker
	I0213 19:22:31.539491   56575 start.go:902] validating driver "docker" against &{Name:embed-certs-815000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-815000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:22:31.539588   56575 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 19:22:31.542586   56575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:22:31.650511   56575 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 03:22:31.640061189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:22:31.650720   56575 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 19:22:31.650774   56575 cni.go:84] Creating CNI manager for ""
	I0213 19:22:31.650786   56575 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:22:31.650798   56575 start_flags.go:321] config:
	{Name:embed-certs-815000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-815000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:22:31.694358   56575 out.go:177] * Starting control plane node embed-certs-815000 in cluster embed-certs-815000
	I0213 19:22:27.296224   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:27.314077   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:27.332308   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.332322   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:27.332386   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:27.351564   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.351577   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:27.351667   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:27.372573   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.372586   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:27.372702   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:27.392093   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.392106   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:27.392175   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:27.410503   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.410517   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:27.410585   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:27.429467   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.429481   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:27.429549   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:27.447301   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.447315   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:27.447385   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:27.465997   56138 logs.go:276] 0 containers: []
	W0213 19:22:27.466031   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:27.466039   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:27.466046   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:27.529051   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:27.529097   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:27.529105   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:27.550697   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:27.550713   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:27.610238   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:27.610254   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:27.655380   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:27.655395   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:30.176798   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:30.200447   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:30.220092   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.220112   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:30.220202   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:30.240316   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.240337   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:30.240422   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:30.259474   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.259489   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:30.259576   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:30.278709   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.278729   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:30.278801   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:30.299457   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.299472   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:30.299541   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:30.319738   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.319751   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:30.319807   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:30.338774   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.338789   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:30.338911   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:30.357848   56138 logs.go:276] 0 containers: []
	W0213 19:22:30.357861   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:30.357869   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:30.357877   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:30.411518   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:30.411537   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:30.437344   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:30.437375   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:30.574107   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:30.574118   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:30.574126   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:30.598003   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:30.598019   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:31.715292   56575 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 19:22:31.736273   56575 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 19:22:31.757228   56575 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 19:22:31.757246   56575 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 19:22:31.757281   56575 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0213 19:22:31.757296   56575 cache.go:56] Caching tarball of preloaded images
	I0213 19:22:31.757430   56575 preload.go:174] Found /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 19:22:31.757440   56575 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 19:22:31.757978   56575 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/embed-certs-815000/config.json ...
	I0213 19:22:31.814841   56575 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 19:22:31.814861   56575 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 19:22:31.814886   56575 cache.go:194] Successfully downloaded all kic artifacts
	I0213 19:22:31.814933   56575 start.go:365] acquiring machines lock for embed-certs-815000: {Name:mk994a4148122db75400e69c089572bd03d88982 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 19:22:31.815027   56575 start.go:369] acquired machines lock for "embed-certs-815000" in 74.125µs
	I0213 19:22:31.815051   56575 start.go:96] Skipping create...Using existing machine configuration
	I0213 19:22:31.815060   56575 fix.go:54] fixHost starting: 
	I0213 19:22:31.815303   56575 cli_runner.go:164] Run: docker container inspect embed-certs-815000 --format={{.State.Status}}
	I0213 19:22:31.868953   56575 fix.go:102] recreateIfNeeded on embed-certs-815000: state=Stopped err=<nil>
	W0213 19:22:31.868983   56575 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 19:22:31.890752   56575 out.go:177] * Restarting existing docker container for "embed-certs-815000" ...
	I0213 19:22:31.933657   56575 cli_runner.go:164] Run: docker start embed-certs-815000
	I0213 19:22:32.195875   56575 cli_runner.go:164] Run: docker container inspect embed-certs-815000 --format={{.State.Status}}
	I0213 19:22:32.254939   56575 kic.go:430] container "embed-certs-815000" state is running.
	I0213 19:22:32.255544   56575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-815000
	I0213 19:22:32.315271   56575 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/embed-certs-815000/config.json ...
	I0213 19:22:32.315712   56575 machine.go:88] provisioning docker machine ...
	I0213 19:22:32.315739   56575 ubuntu.go:169] provisioning hostname "embed-certs-815000"
	I0213 19:22:32.315822   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:32.383183   56575 main.go:141] libmachine: Using SSH client type: native
	I0213 19:22:32.383542   56575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57378 <nil> <nil>}
	I0213 19:22:32.383558   56575 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-815000 && echo "embed-certs-815000" | sudo tee /etc/hostname
	I0213 19:22:32.384751   56575 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0213 19:22:35.553058   56575 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-815000
	
	I0213 19:22:35.553192   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:35.613003   56575 main.go:141] libmachine: Using SSH client type: native
	I0213 19:22:35.613307   56575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57378 <nil> <nil>}
	I0213 19:22:35.613321   56575 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-815000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-815000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-815000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 19:22:35.755863   56575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 19:22:35.755892   56575 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18165-38421/.minikube CaCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18165-38421/.minikube}
	I0213 19:22:35.755911   56575 ubuntu.go:177] setting up certificates
	I0213 19:22:35.755916   56575 provision.go:83] configureAuth start
	I0213 19:22:35.756014   56575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-815000
	I0213 19:22:35.809131   56575 provision.go:138] copyHostCerts
	I0213 19:22:35.809239   56575 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem, removing ...
	I0213 19:22:35.809250   56575 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
	I0213 19:22:35.809391   56575 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem (1078 bytes)
	I0213 19:22:35.809632   56575 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem, removing ...
	I0213 19:22:35.809638   56575 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
	I0213 19:22:35.809719   56575 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem (1123 bytes)
	I0213 19:22:35.809898   56575 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem, removing ...
	I0213 19:22:35.809905   56575 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
	I0213 19:22:35.809986   56575 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem (1679 bytes)
	I0213 19:22:35.810130   56575 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem org=jenkins.embed-certs-815000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-815000]
	I0213 19:22:35.901230   56575 provision.go:172] copyRemoteCerts
	I0213 19:22:35.901295   56575 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 19:22:35.901363   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:35.955667   56575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57378 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/embed-certs-815000/id_rsa Username:docker}
	I0213 19:22:36.061321   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 19:22:33.168870   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:33.185856   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:33.203524   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.203538   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:33.203607   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:33.221402   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.221416   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:33.221483   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:33.239857   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.239873   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:33.239948   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:33.260529   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.260543   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:33.260610   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:33.279506   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.279520   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:33.279591   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:33.298560   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.298576   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:33.298644   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:33.316444   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.316460   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:33.316527   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:33.334292   56138 logs.go:276] 0 containers: []
	W0213 19:22:33.334306   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:33.334314   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:33.334322   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:33.411889   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:33.411901   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:33.411909   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:33.472890   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:33.472904   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:33.538372   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:33.538388   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:33.584119   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:33.584136   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:36.106276   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:36.122966   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:22:36.144273   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.144284   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:22:36.144344   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:22:36.163387   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.163421   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:22:36.163532   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:22:36.182571   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.182585   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:22:36.182653   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:22:36.200433   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.200451   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:22:36.200521   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:22:36.221155   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.221174   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:22:36.221257   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:22:36.242491   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.242504   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:22:36.242572   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:22:36.262887   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.262902   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:22:36.262969   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:22:36.281933   56138 logs.go:276] 0 containers: []
	W0213 19:22:36.281947   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:22:36.281954   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:22:36.281961   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:22:36.328184   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:22:36.328201   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:22:36.348358   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:22:36.348416   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:22:36.423269   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:22:36.423281   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:22:36.423290   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:22:36.447544   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:22:36.447558   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 19:22:36.101702   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 19:22:36.143906   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 19:22:36.188430   56575 provision.go:86] duration metric: configureAuth took 432.495709ms
	I0213 19:22:36.188449   56575 ubuntu.go:193] setting minikube options for container-runtime
	I0213 19:22:36.188609   56575 config.go:182] Loaded profile config "embed-certs-815000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 19:22:36.188673   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:36.244748   56575 main.go:141] libmachine: Using SSH client type: native
	I0213 19:22:36.245029   56575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57378 <nil> <nil>}
	I0213 19:22:36.245039   56575 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 19:22:36.383833   56575 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 19:22:36.383850   56575 ubuntu.go:71] root file system type: overlay
	I0213 19:22:36.383938   56575 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 19:22:36.384025   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:36.442306   56575 main.go:141] libmachine: Using SSH client type: native
	I0213 19:22:36.442613   56575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57378 <nil> <nil>}
	I0213 19:22:36.442692   56575 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 19:22:36.610792   56575 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 19:22:36.610992   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:36.666903   56575 main.go:141] libmachine: Using SSH client type: native
	I0213 19:22:36.667187   56575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57378 <nil> <nil>}
	I0213 19:22:36.667201   56575 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 19:22:36.818880   56575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 19:22:36.818900   56575 machine.go:91] provisioned docker machine in 4.503190398s
	I0213 19:22:36.818910   56575 start.go:300] post-start starting for "embed-certs-815000" (driver="docker")
	I0213 19:22:36.818920   56575 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 19:22:36.818989   56575 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 19:22:36.819046   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:36.873252   56575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57378 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/embed-certs-815000/id_rsa Username:docker}
	I0213 19:22:36.980484   56575 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 19:22:36.984573   56575 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 19:22:36.984595   56575 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 19:22:36.984603   56575 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 19:22:36.984608   56575 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 19:22:36.984617   56575 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/addons for local assets ...
	I0213 19:22:36.984715   56575 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/files for local assets ...
	I0213 19:22:36.984902   56575 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> 388992.pem in /etc/ssl/certs
	I0213 19:22:36.985115   56575 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 19:22:37.000505   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:22:37.040901   56575 start.go:303] post-start completed in 221.953005ms
	I0213 19:22:37.041036   56575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 19:22:37.041128   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:37.094215   56575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57378 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/embed-certs-815000/id_rsa Username:docker}
	I0213 19:22:37.188983   56575 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 19:22:37.194348   56575 fix.go:56] fixHost completed within 5.379300042s
	I0213 19:22:37.194362   56575 start.go:83] releasing machines lock for "embed-certs-815000", held for 5.379340917s
	I0213 19:22:37.194447   56575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-815000
	I0213 19:22:37.246817   56575 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 19:22:37.246811   56575 ssh_runner.go:195] Run: cat /version.json
	I0213 19:22:37.246921   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:37.246921   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:37.303854   56575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57378 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/embed-certs-815000/id_rsa Username:docker}
	I0213 19:22:37.304003   56575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57378 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/embed-certs-815000/id_rsa Username:docker}
	I0213 19:22:37.508450   56575 ssh_runner.go:195] Run: systemctl --version
	I0213 19:22:37.514043   56575 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 19:22:37.521156   56575 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 19:22:37.553543   56575 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 19:22:37.553671   56575 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 19:22:37.569773   56575 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0213 19:22:37.569793   56575 start.go:475] detecting cgroup driver to use...
	I0213 19:22:37.569805   56575 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:22:37.569922   56575 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:22:37.598975   56575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 19:22:37.615804   56575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 19:22:37.631975   56575 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 19:22:37.632133   56575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 19:22:37.649000   56575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:22:37.665249   56575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 19:22:37.682540   56575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:22:37.699106   56575 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 19:22:37.716221   56575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 19:22:37.732862   56575 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 19:22:37.748287   56575 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 19:22:37.763072   56575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:22:37.828997   56575 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 19:22:37.918609   56575 start.go:475] detecting cgroup driver to use...
	I0213 19:22:37.918631   56575 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:22:37.918691   56575 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 19:22:37.936948   56575 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 19:22:37.937028   56575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 19:22:37.956203   56575 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:22:37.990465   56575 ssh_runner.go:195] Run: which cri-dockerd
	I0213 19:22:37.995064   56575 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 19:22:38.015751   56575 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 19:22:38.054881   56575 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 19:22:38.159720   56575 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 19:22:38.260993   56575 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 19:22:38.261196   56575 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 19:22:38.315870   56575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:22:38.383172   56575 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 19:22:38.660199   56575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 19:22:38.678221   56575 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 19:22:38.697323   56575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 19:22:38.715775   56575 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 19:22:38.785661   56575 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 19:22:38.850819   56575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:22:38.912200   56575 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 19:22:38.939284   56575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 19:22:38.956856   56575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:22:39.025157   56575 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 19:22:39.125850   56575 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 19:22:39.125939   56575 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 19:22:39.130633   56575 start.go:543] Will wait 60s for crictl version
	I0213 19:22:39.130686   56575 ssh_runner.go:195] Run: which crictl
	I0213 19:22:39.134824   56575 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 19:22:39.190632   56575 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0213 19:22:39.190707   56575 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:22:39.214099   56575 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:22:39.020846   56138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:39.038103   56138 kubeadm.go:640] restartCluster took 4m12.740512453s
	W0213 19:22:39.038147   56138 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0213 19:22:39.038171   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 19:22:39.471757   56138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 19:22:39.491688   56138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 19:22:39.509057   56138 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 19:22:39.509122   56138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:22:39.527171   56138 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 19:22:39.527216   56138 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 19:22:39.588690   56138 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 19:22:39.588742   56138 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 19:22:39.886161   56138 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 19:22:39.886266   56138 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 19:22:39.886378   56138 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 19:22:40.070636   56138 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 19:22:40.071411   56138 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 19:22:40.078764   56138 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 19:22:40.155649   56138 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 19:22:40.225956   56138 out.go:204]   - Generating certificates and keys ...
	I0213 19:22:40.226039   56138 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 19:22:40.226094   56138 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 19:22:40.226153   56138 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 19:22:40.226234   56138 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 19:22:40.226292   56138 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 19:22:40.226347   56138 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 19:22:40.226412   56138 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 19:22:40.226466   56138 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 19:22:40.226563   56138 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 19:22:40.226637   56138 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 19:22:40.226685   56138 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 19:22:40.226742   56138 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 19:22:40.282137   56138 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 19:22:40.488550   56138 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 19:22:40.665468   56138 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 19:22:40.840979   56138 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 19:22:40.842287   56138 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 19:22:39.259664   56575 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0213 19:22:39.259757   56575 cli_runner.go:164] Run: docker exec -t embed-certs-815000 dig +short host.docker.internal
	I0213 19:22:39.384444   56575 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 19:22:39.384553   56575 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 19:22:39.389625   56575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 19:22:39.407760   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:39.466091   56575 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 19:22:39.466177   56575 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:22:39.487506   56575 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0213 19:22:39.487527   56575 docker.go:615] Images already preloaded, skipping extraction
	I0213 19:22:39.487612   56575 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:22:39.509639   56575 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0213 19:22:39.509656   56575 cache_images.go:84] Images are preloaded, skipping loading
	I0213 19:22:39.509743   56575 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 19:22:39.561704   56575 cni.go:84] Creating CNI manager for ""
	I0213 19:22:39.561723   56575 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:22:39.561749   56575 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 19:22:39.561766   56575 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-815000 NodeName:embed-certs-815000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 19:22:39.561882   56575 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-815000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 19:22:39.561961   56575 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-815000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-815000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 19:22:39.562044   56575 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 19:22:39.579922   56575 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 19:22:39.579998   56575 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 19:22:39.596476   56575 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0213 19:22:39.628619   56575 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 19:22:39.660720   56575 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0213 19:22:39.693908   56575 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0213 19:22:39.698639   56575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 19:22:39.717878   56575 certs.go:56] Setting up /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/embed-certs-815000 for IP: 192.168.67.2
	I0213 19:22:39.717901   56575 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5f1a81e3b2f96d4314e8cdee92a3e3396cb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:22:39.718098   56575 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key
	I0213 19:22:39.718208   56575 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key
	I0213 19:22:39.718347   56575 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/embed-certs-815000/client.key
	I0213 19:22:39.718432   56575 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/embed-certs-815000/apiserver.key.c7fa3a9e
	I0213 19:22:39.718505   56575 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/embed-certs-815000/proxy-client.key
	I0213 19:22:39.718738   56575 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem (1338 bytes)
	W0213 19:22:39.718786   56575 certs.go:433] ignoring /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899_empty.pem, impossibly tiny 0 bytes
	I0213 19:22:39.718796   56575 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 19:22:39.718830   56575 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem (1078 bytes)
	I0213 19:22:39.718864   56575 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem (1123 bytes)
	I0213 19:22:39.718894   56575 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem (1679 bytes)
	I0213 19:22:39.718966   56575 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:22:39.719677   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/embed-certs-815000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 19:22:39.766440   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/embed-certs-815000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 19:22:39.809899   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/embed-certs-815000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 19:22:39.858704   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/embed-certs-815000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 19:22:39.904081   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 19:22:39.950087   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 19:22:39.995371   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 19:22:40.040342   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 19:22:40.086890   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /usr/share/ca-certificates/388992.pem (1708 bytes)
	I0213 19:22:40.139361   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 19:22:40.186959   56575 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem --> /usr/share/ca-certificates/38899.pem (1338 bytes)
	I0213 19:22:40.241613   56575 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 19:22:40.271809   56575 ssh_runner.go:195] Run: openssl version
	I0213 19:22:40.278081   56575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388992.pem && ln -fs /usr/share/ca-certificates/388992.pem /etc/ssl/certs/388992.pem"
	I0213 19:22:40.297993   56575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388992.pem
	I0213 19:22:40.304640   56575 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 02:17 /usr/share/ca-certificates/388992.pem
	I0213 19:22:40.304706   56575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388992.pem
	I0213 19:22:40.313258   56575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/388992.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 19:22:40.330812   56575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 19:22:40.349687   56575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:22:40.354929   56575 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:22:40.354983   56575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:22:40.363162   56575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 19:22:40.381655   56575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38899.pem && ln -fs /usr/share/ca-certificates/38899.pem /etc/ssl/certs/38899.pem"
	I0213 19:22:40.399658   56575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38899.pem
	I0213 19:22:40.404552   56575 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 02:17 /usr/share/ca-certificates/38899.pem
	I0213 19:22:40.404607   56575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38899.pem
	I0213 19:22:40.412676   56575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38899.pem /etc/ssl/certs/51391683.0"
	I0213 19:22:40.432407   56575 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 19:22:40.438037   56575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 19:22:40.445429   56575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 19:22:40.453581   56575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 19:22:40.461289   56575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 19:22:40.468070   56575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 19:22:40.475693   56575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
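The six openssl invocations above are each `openssl x509 -noout -in <cert> -checkend 86400`, i.e. a check that every existing control-plane certificate stays valid for at least another 24 hours before the cluster is restarted with the old certs. The following is a minimal Go sketch of an equivalent expiry check; it is illustrative only, not minikube's implementation, and the certificate paths are whatever files you pass on the command line.

    // Sketch: report whether each PEM certificate expires within the next 24h,
    // mirroring `openssl x509 -noout -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block found in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// -checkend 86400 asks: is the cert still valid 86400 seconds from now?
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	for _, p := range os.Args[1:] {
    		expiring, err := expiresWithin(p, 24*time.Hour)
    		if err != nil {
    			fmt.Fprintln(os.Stderr, p, err)
    			continue
    		}
    		fmt.Printf("%s expires within 24h: %v\n", p, expiring)
    	}
    }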
	I0213 19:22:40.483381   56575 kubeadm.go:404] StartCluster: {Name:embed-certs-815000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-815000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:22:40.483509   56575 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:22:40.505586   56575 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 19:22:40.524968   56575 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 19:22:40.524990   56575 kubeadm.go:636] restartCluster start
	I0213 19:22:40.525052   56575 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 19:22:40.542244   56575 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:40.542358   56575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-815000
	I0213 19:22:40.602440   56575 kubeconfig.go:135] verify returned: extract IP: "embed-certs-815000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:22:40.602606   56575 kubeconfig.go:146] "embed-certs-815000" context is missing from /Users/jenkins/minikube-integration/18165-38421/kubeconfig - will repair!
	I0213 19:22:40.602963   56575 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/kubeconfig: {Name:mk18bf84f3ce48ab7f0238c5bd9b6dfe6fbb866a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:22:40.604459   56575 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 19:22:40.622429   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:40.622514   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:40.652601   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:40.863914   56138 out.go:204]   - Booting up control plane ...
	I0213 19:22:40.864046   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 19:22:40.864136   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 19:22:40.864259   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 19:22:40.864329   56138 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 19:22:40.864608   56138 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 19:22:41.123672   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:41.123788   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:41.143267   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:41.623600   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:41.623781   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:41.641788   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:42.122492   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:42.122554   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:42.139638   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:42.622626   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:42.622755   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:42.640648   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:43.122627   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:43.122759   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:43.140952   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:43.623671   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:43.623829   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:43.643139   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:44.122575   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:44.122669   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:44.139336   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:44.623560   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:44.623663   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:44.640491   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:45.124556   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:45.124773   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:45.142157   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:45.622852   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:45.622956   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:45.640109   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:46.122646   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:46.122787   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:46.141119   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:46.623655   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:46.623811   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:46.642124   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:47.122461   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:47.122544   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:47.139445   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:47.622829   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:47.623030   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:47.641860   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:48.123368   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:48.123453   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:48.140230   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:48.623208   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:48.623357   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:48.642584   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:49.123468   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:49.123574   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:49.140934   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:49.624117   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:49.624290   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:49.642263   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:50.123863   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:50.123967   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:50.141036   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:50.623851   56575 api_server.go:166] Checking apiserver status ...
	I0213 19:22:50.623991   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:22:50.641796   56575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:50.641814   56575 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 19:22:50.641831   56575 kubeadm.go:1135] stopping kube-system containers ...
	I0213 19:22:50.641899   56575 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:22:50.662312   56575 docker.go:483] Stopping containers: [09540b570469 62c6da71a74e 928a76956a21 99f44406baa1 43569865eda6 b61e1b9dbc1f ff5b76f85cbd f0322c428bd9 0ad15c86d35a d15621adf66c 97afb98b65de e8985f747dce f5877fe046dc 9bf377addf28 f06684b06f4a]
	I0213 19:22:50.662397   56575 ssh_runner.go:195] Run: docker stop 09540b570469 62c6da71a74e 928a76956a21 99f44406baa1 43569865eda6 b61e1b9dbc1f ff5b76f85cbd f0322c428bd9 0ad15c86d35a d15621adf66c 97afb98b65de e8985f747dce f5877fe046dc 9bf377addf28 f06684b06f4a
	I0213 19:22:50.682070   56575 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 19:22:50.700103   56575 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:22:50.715146   56575 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 14 03:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 14 03:21 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Feb 14 03:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 14 03:21 /etc/kubernetes/scheduler.conf
	
	I0213 19:22:50.715222   56575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0213 19:22:50.730023   56575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0213 19:22:50.744649   56575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0213 19:22:50.758881   56575 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:50.758963   56575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0213 19:22:50.774016   56575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0213 19:22:50.788605   56575 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:22:50.788664   56575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0213 19:22:50.803568   56575 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 19:22:50.819229   56575 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 19:22:50.819246   56575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:22:50.874881   56575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:22:51.439324   56575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:22:51.571019   56575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:22:51.636500   56575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:22:51.803194   56575 api_server.go:52] waiting for apiserver process to appear ...
	I0213 19:22:51.803330   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:52.303548   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:52.803364   56575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:22:52.830524   56575 api_server.go:72] duration metric: took 1.027332693s to wait for apiserver process to appear ...
	I0213 19:22:52.830546   56575 api_server.go:88] waiting for apiserver healthz status ...
	I0213 19:22:52.830587   56575 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57382/healthz ...
	I0213 19:22:54.829279   56575 api_server.go:279] https://127.0.0.1:57382/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 19:22:54.829320   56575 api_server.go:103] status: https://127.0.0.1:57382/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 19:22:54.829338   56575 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57382/healthz ...
	I0213 19:22:54.908563   56575 api_server.go:279] https://127.0.0.1:57382/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:22:54.908589   56575 api_server.go:103] status: https://127.0.0.1:57382/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:22:54.908601   56575 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57382/healthz ...
	I0213 19:22:54.916751   56575 api_server.go:279] https://127.0.0.1:57382/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:22:54.916789   56575 api_server.go:103] status: https://127.0.0.1:57382/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:22:55.332412   56575 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57382/healthz ...
	I0213 19:22:55.338529   56575 api_server.go:279] https://127.0.0.1:57382/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:22:55.338543   56575 api_server.go:103] status: https://127.0.0.1:57382/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:22:55.832309   56575 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57382/healthz ...
	I0213 19:22:55.837842   56575 api_server.go:279] https://127.0.0.1:57382/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:22:55.837863   56575 api_server.go:103] status: https://127.0.0.1:57382/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:22:56.331736   56575 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57382/healthz ...
	I0213 19:22:56.403343   56575 api_server.go:279] https://127.0.0.1:57382/healthz returned 200:
	ok
	I0213 19:22:56.416013   56575 api_server.go:141] control plane version: v1.28.4
	I0213 19:22:56.416041   56575 api_server.go:131] duration metric: took 3.585495145s to wait for apiserver health ...
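The healthz sequence above goes from 403 (the anonymous probe is rejected while the RBAC bootstrap roles are still being created), through 500 (several post-start hooks not yet finished), to 200 once the control plane is up. Below is a minimal Go sketch of such a poll loop against the forwarded apiserver port; the URL, timeout, and poll interval are illustrative assumptions, and this is not minikube's api_server.go code.

    // Sketch: poll an apiserver /healthz endpoint until it returns HTTP 200
    // or the overall timeout expires.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The cluster CA is not in the host trust store, so a local
    		// health probe typically skips TLS verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: apiserver is healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://127.0.0.1:57382/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }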
	I0213 19:22:56.416056   56575 cni.go:84] Creating CNI manager for ""
	I0213 19:22:56.416081   56575 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:22:56.440833   56575 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 19:22:56.462900   56575 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 19:22:56.520759   56575 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 19:22:56.625987   56575 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 19:22:56.704388   56575 system_pods.go:59] 8 kube-system pods found
	I0213 19:22:56.704431   56575 system_pods.go:61] "coredns-5dd5756b68-64t29" [f3c27154-abe7-4890-87ba-2b27de51b670] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 19:22:56.704437   56575 system_pods.go:61] "etcd-embed-certs-815000" [a969bfe3-aad6-4f1c-98c5-9a7750bea686] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 19:22:56.704443   56575 system_pods.go:61] "kube-apiserver-embed-certs-815000" [5a899c69-0ce9-4c3b-b308-a37ec571594a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 19:22:56.704448   56575 system_pods.go:61] "kube-controller-manager-embed-certs-815000" [b4093a37-095e-4f3f-80bf-7d927caeccb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 19:22:56.704459   56575 system_pods.go:61] "kube-proxy-bwbmc" [b49b3579-9183-4971-8bd1-1cc8ff43084f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 19:22:56.704472   56575 system_pods.go:61] "kube-scheduler-embed-certs-815000" [22f67215-f8ca-42fe-b110-ad8fac60a7af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 19:22:56.704488   56575 system_pods.go:61] "metrics-server-57f55c9bc5-xmg5j" [fd2f1f19-8319-4f15-be8f-dd8b9a1ff024] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 19:22:56.704501   56575 system_pods.go:61] "storage-provisioner" [c8e0caa6-f6cc-4f22-89b7-39955a99aa32] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 19:22:56.704508   56575 system_pods.go:74] duration metric: took 78.507264ms to wait for pod list to return data ...
	I0213 19:22:56.704514   56575 node_conditions.go:102] verifying NodePressure condition ...
	I0213 19:22:56.710009   56575 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 19:22:56.710030   56575 node_conditions.go:123] node cpu capacity is 12
	I0213 19:22:56.710042   56575 node_conditions.go:105] duration metric: took 5.524542ms to run NodePressure ...
	I0213 19:22:56.710059   56575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:22:57.426861   56575 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 19:22:57.431183   56575 kubeadm.go:787] kubelet initialised
	I0213 19:22:57.431195   56575 kubeadm.go:788] duration metric: took 4.318699ms waiting for restarted kubelet to initialise ...
	I0213 19:22:57.431202   56575 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
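The pod_ready loop declared here polls each system-critical pod until its Ready condition becomes True; in the lines that follow, coredns turns Ready after about 31s while metrics-server stays False for the rest of this excerpt. A hedged client-go sketch of that per-pod readiness check is below; the kubeconfig path and pod name are placeholders, and this is not minikube's pod_ready.go code.

    // Sketch: read one pod in kube-system and report whether its Ready
    // condition is True, using client-go.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-64t29", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Ready:", podReady(pod))
    }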
	I0213 19:22:57.436986   56575 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-64t29" in "kube-system" namespace to be "Ready" ...
	I0213 19:22:59.443179   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:01.444995   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:03.446396   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:05.944349   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:07.946372   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:10.443477   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:12.444522   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:14.943218   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:17.442896   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:19.444625   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:20.853969   56138 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 19:23:20.854772   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:23:20.854921   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:23:21.943661   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:23.948947   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:25.856609   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:23:25.856765   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:23:26.445640   56575 pod_ready.go:102] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:28.944695   56575 pod_ready.go:92] pod "coredns-5dd5756b68-64t29" in "kube-system" namespace has status "Ready":"True"
	I0213 19:23:28.944707   56575 pod_ready.go:81] duration metric: took 31.507786132s waiting for pod "coredns-5dd5756b68-64t29" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:28.944714   56575 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-815000" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:28.949808   56575 pod_ready.go:92] pod "etcd-embed-certs-815000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:23:28.949819   56575 pod_ready.go:81] duration metric: took 5.100493ms waiting for pod "etcd-embed-certs-815000" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:28.949826   56575 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-815000" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:28.956124   56575 pod_ready.go:92] pod "kube-apiserver-embed-certs-815000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:23:28.956135   56575 pod_ready.go:81] duration metric: took 6.304706ms waiting for pod "kube-apiserver-embed-certs-815000" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:28.956141   56575 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-815000" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:28.960761   56575 pod_ready.go:92] pod "kube-controller-manager-embed-certs-815000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:23:28.960770   56575 pod_ready.go:81] duration metric: took 4.624501ms waiting for pod "kube-controller-manager-embed-certs-815000" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:28.960777   56575 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bwbmc" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:28.965409   56575 pod_ready.go:92] pod "kube-proxy-bwbmc" in "kube-system" namespace has status "Ready":"True"
	I0213 19:23:28.965419   56575 pod_ready.go:81] duration metric: took 4.637345ms waiting for pod "kube-proxy-bwbmc" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:28.965426   56575 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-815000" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:29.343031   56575 pod_ready.go:92] pod "kube-scheduler-embed-certs-815000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:23:29.343043   56575 pod_ready.go:81] duration metric: took 377.613349ms waiting for pod "kube-scheduler-embed-certs-815000" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:29.343050   56575 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace to be "Ready" ...
	I0213 19:23:31.351747   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:33.849451   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:35.858542   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:23:35.858755   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:23:36.349400   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:38.850346   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:41.349525   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:43.350818   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:45.850247   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:48.348467   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:50.350145   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:52.402470   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:54.850353   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:55.860565   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:23:55.860787   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:23:56.851265   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:23:59.348791   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:01.850498   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:04.348090   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:06.349090   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:08.851388   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:11.349318   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:13.349563   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:15.849054   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:18.351669   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:20.850330   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:23.349196   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:25.848869   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:27.850179   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:29.851231   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:32.348844   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:34.349388   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:35.862294   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:24:35.862457   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:24:35.862465   56138 kubeadm.go:322] 
	I0213 19:24:35.862493   56138 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 19:24:35.862530   56138 kubeadm.go:322] 	timed out waiting for the condition
	I0213 19:24:35.862543   56138 kubeadm.go:322] 
	I0213 19:24:35.862569   56138 kubeadm.go:322] This error is likely caused by:
	I0213 19:24:35.862596   56138 kubeadm.go:322] 	- The kubelet is not running
	I0213 19:24:35.862673   56138 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 19:24:35.862681   56138 kubeadm.go:322] 
	I0213 19:24:35.862754   56138 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 19:24:35.862777   56138 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 19:24:35.862810   56138 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 19:24:35.862821   56138 kubeadm.go:322] 
	I0213 19:24:35.862919   56138 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 19:24:35.863007   56138 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 19:24:35.863082   56138 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 19:24:35.863119   56138 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 19:24:35.863169   56138 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 19:24:35.863192   56138 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 19:24:35.866981   56138 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 19:24:35.867065   56138 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 19:24:35.867174   56138 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 19:24:35.867262   56138 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 19:24:35.867331   56138 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 19:24:35.867388   56138 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0213 19:24:35.867452   56138 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0213 19:24:35.867491   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 19:24:36.285638   56138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 19:24:36.302881   56138 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 19:24:36.302939   56138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:24:36.318363   56138 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 19:24:36.318384   56138 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 19:24:36.372580   56138 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 19:24:36.372625   56138 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 19:24:36.623082   56138 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 19:24:36.623173   56138 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 19:24:36.623249   56138 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 19:24:36.797001   56138 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 19:24:36.797799   56138 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 19:24:36.804613   56138 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 19:24:36.877590   56138 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 19:24:36.915795   56138 out.go:204]   - Generating certificates and keys ...
	I0213 19:24:36.915863   56138 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 19:24:36.915937   56138 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 19:24:36.916090   56138 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 19:24:36.916173   56138 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 19:24:36.916263   56138 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 19:24:36.916340   56138 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 19:24:36.916393   56138 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 19:24:36.916451   56138 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 19:24:36.916517   56138 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 19:24:36.916589   56138 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 19:24:36.916626   56138 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 19:24:36.916672   56138 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 19:24:37.164228   56138 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 19:24:37.229183   56138 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 19:24:37.466008   56138 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 19:24:37.660133   56138 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 19:24:37.660795   56138 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 19:24:36.349799   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:38.849302   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:37.683182   56138 out.go:204]   - Booting up control plane ...
	I0213 19:24:37.683293   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 19:24:37.683389   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 19:24:37.683465   56138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 19:24:37.683560   56138 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 19:24:37.683739   56138 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 19:24:41.348487   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:43.349925   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:45.849010   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:47.849428   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:50.348583   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:52.349091   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:54.848524   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:56.849917   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:24:58.851627   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:01.348968   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:03.849478   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:05.850372   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:08.349383   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:10.850169   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:13.349201   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:15.350162   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:17.849448   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:20.348795   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:17.671991   56138 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 19:25:17.672810   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:25:17.673046   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:25:22.349121   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:24.849992   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:22.675221   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:25:22.675453   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:25:27.348859   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:29.849977   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:32.349207   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:34.349333   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:32.676698   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:25:32.676938   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:25:36.349562   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:38.350819   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:40.849527   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:43.348210   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:45.349763   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:47.848905   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:49.849013   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:51.850500   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:54.349813   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:52.679029   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:25:52.679196   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:25:56.848802   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:25:59.348622   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:01.350153   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:03.848918   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:05.849348   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:08.350844   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:10.847923   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:12.849926   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:15.349708   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:17.848073   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:19.849703   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:22.350340   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:24.350353   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:26.848871   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:29.350006   56575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xmg5j" in "kube-system" namespace has status "Ready":"False"
	I0213 19:26:32.680273   56138 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 19:26:32.680430   56138 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 19:26:32.680441   56138 kubeadm.go:322] 
	I0213 19:26:32.680468   56138 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 19:26:32.680499   56138 kubeadm.go:322] 	timed out waiting for the condition
	I0213 19:26:32.680503   56138 kubeadm.go:322] 
	I0213 19:26:32.680528   56138 kubeadm.go:322] This error is likely caused by:
	I0213 19:26:32.680554   56138 kubeadm.go:322] 	- The kubelet is not running
	I0213 19:26:32.680636   56138 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 19:26:32.680647   56138 kubeadm.go:322] 
	I0213 19:26:32.680750   56138 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 19:26:32.680785   56138 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 19:26:32.680810   56138 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 19:26:32.680816   56138 kubeadm.go:322] 
	I0213 19:26:32.680893   56138 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 19:26:32.680975   56138 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 19:26:32.681045   56138 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 19:26:32.681090   56138 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 19:26:32.681156   56138 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 19:26:32.681182   56138 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 19:26:32.685164   56138 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 19:26:32.685231   56138 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 19:26:32.685343   56138 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 19:26:32.685431   56138 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 19:26:32.685498   56138 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 19:26:32.685562   56138 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0213 19:26:32.685595   56138 kubeadm.go:406] StartCluster complete in 8m6.424289377s
	I0213 19:26:32.685681   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 19:26:32.704479   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.704493   56138 logs.go:278] No container was found matching "kube-apiserver"
	I0213 19:26:32.704563   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 19:26:32.722967   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.722981   56138 logs.go:278] No container was found matching "etcd"
	I0213 19:26:32.723049   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 19:26:32.742268   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.742281   56138 logs.go:278] No container was found matching "coredns"
	I0213 19:26:32.742343   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 19:26:32.760735   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.760753   56138 logs.go:278] No container was found matching "kube-scheduler"
	I0213 19:26:32.760843   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 19:26:32.779954   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.779967   56138 logs.go:278] No container was found matching "kube-proxy"
	I0213 19:26:32.780031   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 19:26:32.798895   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.798911   56138 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 19:26:32.798982   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 19:26:32.817577   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.817592   56138 logs.go:278] No container was found matching "kindnet"
	I0213 19:26:32.817658   56138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 19:26:32.838146   56138 logs.go:276] 0 containers: []
	W0213 19:26:32.838161   56138 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 19:26:32.838169   56138 logs.go:123] Gathering logs for kubelet ...
	I0213 19:26:32.838176   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 19:26:32.886666   56138 logs.go:123] Gathering logs for dmesg ...
	I0213 19:26:32.886682   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 19:26:32.907423   56138 logs.go:123] Gathering logs for describe nodes ...
	I0213 19:26:32.907439   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 19:26:32.976242   56138 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 19:26:32.976270   56138 logs.go:123] Gathering logs for Docker ...
	I0213 19:26:32.976278   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 19:26:32.998608   56138 logs.go:123] Gathering logs for container status ...
	I0213 19:26:32.998623   56138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0213 19:26:33.063184   56138 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0213 19:26:33.063209   56138 out.go:239] * 
	W0213 19:26:33.063294   56138 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 19:26:33.063314   56138 out.go:239] * 
	W0213 19:26:33.064054   56138 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 19:26:33.126810   56138 out.go:177] 
	W0213 19:26:33.169861   56138 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 19:26:33.169912   56138 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0213 19:26:33.169934   56138 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0213 19:26:33.211474   56138 out.go:177] 
	
	
	==> Docker <==
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.038417857Z" level=info msg="Loading containers: start."
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.126201672Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.165065133Z" level=info msg="Loading containers: done."
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.172861120Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.172921650Z" level=info msg="Daemon has completed initialization"
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.192080634Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 03:18:14 old-k8s-version-187000 systemd[1]: Started Docker Application Container Engine.
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.192148976Z" level=info msg="API listen on [::]:2376"
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:22.560647039Z" level=info msg="Processing signal 'terminated'"
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:22.561661074Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:22.562254523Z" level=info msg="Daemon shutdown complete"
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: docker.service: Deactivated successfully.
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: Starting Docker Application Container Engine...
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.624136291Z" level=info msg="Starting up"
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.631799867Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.875735799Z" level=info msg="Loading containers: start."
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.970706234Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.007890871Z" level=info msg="Loading containers: done."
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.015528496Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.015592390Z" level=info msg="Daemon has completed initialization"
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.033874246Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.033915504Z" level=info msg="API listen on [::]:2376"
	Feb 14 03:18:23 old-k8s-version-187000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-02-14T03:26:34Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 03:26:35 up  2:05,  0 users,  load average: 5.28, 5.13, 5.07
	Linux old-k8s-version-187000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 14 03:26:33 old-k8s-version-187000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 03:26:33 old-k8s-version-187000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 149.
	Feb 14 03:26:33 old-k8s-version-187000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 03:26:33 old-k8s-version-187000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 03:26:34 old-k8s-version-187000 kubelet[19198]: I0214 03:26:34.108613   19198 server.go:410] Version: v1.16.0
	Feb 14 03:26:34 old-k8s-version-187000 kubelet[19198]: I0214 03:26:34.108971   19198 plugins.go:100] No cloud provider specified.
	Feb 14 03:26:34 old-k8s-version-187000 kubelet[19198]: I0214 03:26:34.108982   19198 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 03:26:34 old-k8s-version-187000 kubelet[19198]: I0214 03:26:34.110731   19198 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 03:26:34 old-k8s-version-187000 kubelet[19198]: W0214 03:26:34.112431   19198 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 03:26:34 old-k8s-version-187000 kubelet[19198]: W0214 03:26:34.112505   19198 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 03:26:34 old-k8s-version-187000 kubelet[19198]: F0214 03:26:34.112534   19198 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 03:26:34 old-k8s-version-187000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 03:26:34 old-k8s-version-187000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 03:26:34 old-k8s-version-187000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 150.
	Feb 14 03:26:34 old-k8s-version-187000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 03:26:34 old-k8s-version-187000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 03:26:35 old-k8s-version-187000 kubelet[19308]: I0214 03:26:35.086016   19308 server.go:410] Version: v1.16.0
	Feb 14 03:26:35 old-k8s-version-187000 kubelet[19308]: I0214 03:26:35.086215   19308 plugins.go:100] No cloud provider specified.
	Feb 14 03:26:35 old-k8s-version-187000 kubelet[19308]: I0214 03:26:35.086224   19308 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 03:26:35 old-k8s-version-187000 kubelet[19308]: I0214 03:26:35.087909   19308 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 03:26:35 old-k8s-version-187000 kubelet[19308]: W0214 03:26:35.088601   19308 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 03:26:35 old-k8s-version-187000 kubelet[19308]: W0214 03:26:35.088665   19308 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 03:26:35 old-k8s-version-187000 kubelet[19308]: F0214 03:26:35.088688   19308 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 03:26:35 old-k8s-version-187000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 03:26:35 old-k8s-version-187000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
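The kubelet section above shows the root cause of the refused connections to localhost:8443: every restart (counter 149, 150, ...) dies with "failed to run Kubelet: mountpoint for cpu not found", so the static apiserver pod never comes up. The commands below are a diagnostic sketch only, not part of the test run; they assume the old-k8s-version-187000 profile is still present and reachable via minikube ssh.

  # check whether the cpu cgroup controller is visible inside the node container
  out/minikube-darwin-amd64 -p old-k8s-version-187000 ssh -- "grep -w cpu /proc/cgroups; mount | grep cgroup"

  # tail the kubelet unit to confirm the restart loop seen in the log above
  out/minikube-darwin-amd64 -p old-k8s-version-187000 ssh -- "sudo journalctl -u kubelet --no-pager | tail -n 20"
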
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 2 (414.951933ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-187000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (509.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:27:03.314259   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:27:06.226039   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:27:40.764383   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/calico-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:27:46.705355   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:27:49.272918   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:29:09.751502   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:29:12.318570   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:29:18.097951   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:29:40.442662   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:29:50.066276   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:30:23.922518   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:30:41.146400   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:30:45.401567   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:30:47.802372   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:31:17.718555   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/calico-210000/client.crt: no such file or directory
E0213 19:31:21.699669   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:31:47.094518   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:32:03.382873   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:32:10.918988   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:32:44.815230   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:32:46.775371   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:32:49.342533   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:33:26.431778   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:34:18.170667   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:34:22.425336   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:34:22.453575   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:34:34.124785   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:34:40.514341   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-187000 -n old-k8s-version-187000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 2 (426.64465ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-187000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
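The nine-minute wait above polls the kubernetes-dashboard namespace for pods labelled k8s-app=kubernetes-dashboard, and every poll ends in EOF from https://127.0.0.1:57240, the host port forwarded to the profile's apiserver. The sketch below re-checks the same condition by hand; it assumes kubectl is installed and that the kubeconfig context carries the profile name (minikube's default), and is not part of the test harness.

  # list the pods the test waits for, via the profile's kubeconfig context
  kubectl --context old-k8s-version-187000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide

  # probe the forwarded apiserver endpoint that kept returning EOF above
  curl -k https://127.0.0.1:57240/healthz
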
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-187000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-187000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f",
	        "Created": "2024-02-14T03:12:04.577549374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 384323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:18:07.857189614Z",
	            "FinishedAt": "2024-02-14T03:18:05.063436769Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hosts",
	        "LogPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f-json.log",
	        "Name": "/old-k8s-version-187000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-187000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-187000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-187000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-187000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-187000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d045fd4df5a483f35dc86c4e54cb8d1019191d338bf04e887522b7ef448b5799",
	            "SandboxKey": "/var/run/docker/netns/d045fd4df5a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57241"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57242"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57238"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57239"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57240"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-187000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e0b9362b2efd",
	                        "old-k8s-version-187000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "4cb1b8693c9780c94ad8de0e0072aef11b304b625a6e68f12739c271830cb055",
	                    "EndpointID": "6d54b12cf11b7964d3b93a05495e26e918bbcb712685fd590903de422b0b5cf6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-187000",
	                        "e0b9362b2efd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
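The inspect dump above is what the post-mortem helper captures verbatim; when reading it by hand, the few fields that matter here (state, restart count, network address, and the host port mapped to 8443/tcp) can be pulled out with docker inspect's Go-template --format flag. A sketch using the container name from this report:

  # state, restart count and the address on the profile's network
  docker inspect old-k8s-version-187000 --format 'state={{.State.Status}} restarts={{.RestartCount}} ip={{with index .NetworkSettings.Networks "old-k8s-version-187000"}}{{.IPAddress}}{{end}}'

  # the host port that fronts the apiserver (8443/tcp), i.e. 57240 in this run
  docker inspect old-k8s-version-187000 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
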
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 2 (425.650041ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
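The two probes above query {{.Host}} and {{.APIServer}} separately (Running vs. Stopped); the status command accepts a template with several fields at once, so a single combined view is possible. A sketch, using only fields the status command already reports:

  # one combined status probe instead of separate --format calls
  out/minikube-darwin-amd64 status -p old-k8s-version-187000 --format 'host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'
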
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-187000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-187000 logs -n 25: (1.639022193s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-187000        | old-k8s-version-187000       | jenkins | v1.32.0 | 13 Feb 24 19:16 PST |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-187000                              | old-k8s-version-187000       | jenkins | v1.32.0 | 13 Feb 24 19:18 PST | 13 Feb 24 19:18 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-187000             | old-k8s-version-187000       | jenkins | v1.32.0 | 13 Feb 24 19:18 PST | 13 Feb 24 19:18 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-187000                              | old-k8s-version-187000       | jenkins | v1.32.0 | 13 Feb 24 19:18 PST |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| image   | no-preload-867000 image list                           | no-preload-867000            | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:20 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-867000                                   | no-preload-867000            | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:20 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-867000                                   | no-preload-867000            | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:20 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-867000                                   | no-preload-867000            | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:20 PST |
	| delete  | -p no-preload-867000                                   | no-preload-867000            | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:20 PST |
	| start   | -p embed-certs-815000                                  | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:20 PST | 13 Feb 24 19:22 PST |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-815000            | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:22 PST | 13 Feb 24 19:22 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-815000                                  | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:22 PST | 13 Feb 24 19:22 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-815000                 | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:22 PST | 13 Feb 24 19:22 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-815000                                  | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:22 PST | 13 Feb 24 19:28 PST |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | embed-certs-815000 image list                          | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:28 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-815000                                  | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:28 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-815000                                  | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:28 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-815000                                  | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:28 PST |
	| delete  | -p embed-certs-815000                                  | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:28 PST |
	| delete  | -p                                                     | disable-driver-mounts-377000 | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:28 PST |
	|         | disable-driver-mounts-377000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:29 PST |
	|         | default-k8s-diff-port-069000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-069000  | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:29 PST | 13 Feb 24 19:29 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:29 PST | 13 Feb 24 19:29 PST |
	|         | default-k8s-diff-port-069000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-069000       | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:29 PST | 13 Feb 24 19:29 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:29 PST | 13 Feb 24 19:35 PST |
	|         | default-k8s-diff-port-069000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 19:29:44
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 19:29:44.882498   57047 out.go:291] Setting OutFile to fd 1 ...
	I0213 19:29:44.882751   57047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:29:44.882758   57047 out.go:304] Setting ErrFile to fd 2...
	I0213 19:29:44.882763   57047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:29:44.882935   57047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 19:29:44.884317   57047 out.go:298] Setting JSON to false
	I0213 19:29:44.907027   57047 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":18243,"bootTime":1707863141,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 19:29:44.907134   57047 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 19:29:44.929127   57047 out.go:177] * [default-k8s-diff-port-069000] minikube v1.32.0 on Darwin 14.3.1
	I0213 19:29:44.971610   57047 out.go:177]   - MINIKUBE_LOCATION=18165
	I0213 19:29:44.971701   57047 notify.go:220] Checking for updates...
	I0213 19:29:44.994612   57047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:29:45.016618   57047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 19:29:45.038571   57047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 19:29:45.059561   57047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	I0213 19:29:45.081755   57047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 19:29:45.103690   57047 config.go:182] Loaded profile config "default-k8s-diff-port-069000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 19:29:45.104096   57047 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 19:29:45.203484   57047 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 19:29:45.203661   57047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:29:45.314939   57047 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 03:29:45.304411783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:29:45.357624   57047 out.go:177] * Using the docker driver based on existing profile
	I0213 19:29:45.380627   57047 start.go:298] selected driver: docker
	I0213 19:29:45.380656   57047 start.go:902] validating driver "docker" against &{Name:default-k8s-diff-port-069000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-069000 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:29:45.380781   57047 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 19:29:45.385311   57047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:29:45.491359   57047 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 03:29:45.481408958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:29:45.491593   57047 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 19:29:45.491645   57047 cni.go:84] Creating CNI manager for ""
	I0213 19:29:45.491657   57047 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:29:45.491668   57047 start_flags.go:321] config:
	{Name:default-k8s-diff-port-069000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-069000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
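Note: the fully resolved profile shown above is persisted to config.json before the node is started (the exact path appears a few lines below in the profile.go:148 message). A hedged host-side sketch for pretty-printing it, assuming python3 is available on the Jenkins host:

	python3 -m json.tool \
	  /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/config.json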
	I0213 19:29:45.551416   57047 out.go:177] * Starting control plane node default-k8s-diff-port-069000 in cluster default-k8s-diff-port-069000
	I0213 19:29:45.589081   57047 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 19:29:45.610250   57047 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 19:29:45.631399   57047 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 19:29:45.631494   57047 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 19:29:45.631498   57047 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0213 19:29:45.631525   57047 cache.go:56] Caching tarball of preloaded images
	I0213 19:29:45.631742   57047 preload.go:174] Found /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 19:29:45.631763   57047 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 19:29:45.632686   57047 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/config.json ...
	I0213 19:29:45.684571   57047 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 19:29:45.684608   57047 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 19:29:45.684628   57047 cache.go:194] Successfully downloaded all kic artifacts
	I0213 19:29:45.684680   57047 start.go:365] acquiring machines lock for default-k8s-diff-port-069000: {Name:mkf472d562ea5c3973ee2d6fcfd32873efd8a3e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 19:29:45.684873   57047 start.go:369] acquired machines lock for "default-k8s-diff-port-069000" in 172.326µs
	I0213 19:29:45.684898   57047 start.go:96] Skipping create...Using existing machine configuration
	I0213 19:29:45.684906   57047 fix.go:54] fixHost starting: 
	I0213 19:29:45.685134   57047 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-069000 --format={{.State.Status}}
	I0213 19:29:45.735926   57047 fix.go:102] recreateIfNeeded on default-k8s-diff-port-069000: state=Stopped err=<nil>
	W0213 19:29:45.735957   57047 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 19:29:45.757546   57047 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-069000" ...
	I0213 19:29:45.800648   57047 cli_runner.go:164] Run: docker start default-k8s-diff-port-069000
	I0213 19:29:46.064868   57047 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-069000 --format={{.State.Status}}
	I0213 19:29:46.126711   57047 kic.go:430] container "default-k8s-diff-port-069000" state is running.
	I0213 19:29:46.127855   57047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-069000
	I0213 19:29:46.189996   57047 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/config.json ...
	I0213 19:29:46.190561   57047 machine.go:88] provisioning docker machine ...
	I0213 19:29:46.190605   57047 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-069000"
	I0213 19:29:46.190698   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:46.256385   57047 main.go:141] libmachine: Using SSH client type: native
	I0213 19:29:46.256728   57047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57725 <nil> <nil>}
	I0213 19:29:46.256741   57047 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-069000 && echo "default-k8s-diff-port-069000" | sudo tee /etc/hostname
	I0213 19:29:46.257778   57047 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0213 19:29:49.423148   57047 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-069000
	
	I0213 19:29:49.423235   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:49.475492   57047 main.go:141] libmachine: Using SSH client type: native
	I0213 19:29:49.475787   57047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57725 <nil> <nil>}
	I0213 19:29:49.475804   57047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-069000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-069000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-069000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 19:29:49.616911   57047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 19:29:49.616933   57047 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18165-38421/.minikube CaCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18165-38421/.minikube}
	I0213 19:29:49.616952   57047 ubuntu.go:177] setting up certificates
	I0213 19:29:49.616968   57047 provision.go:83] configureAuth start
	I0213 19:29:49.617051   57047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-069000
	I0213 19:29:49.668523   57047 provision.go:138] copyHostCerts
	I0213 19:29:49.668631   57047 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem, removing ...
	I0213 19:29:49.668643   57047 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
	I0213 19:29:49.668752   57047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem (1078 bytes)
	I0213 19:29:49.668995   57047 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem, removing ...
	I0213 19:29:49.669001   57047 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
	I0213 19:29:49.669079   57047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem (1123 bytes)
	I0213 19:29:49.669294   57047 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem, removing ...
	I0213 19:29:49.669300   57047 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
	I0213 19:29:49.669383   57047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem (1679 bytes)
	I0213 19:29:49.669528   57047 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-069000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-069000]
	I0213 19:29:49.729776   57047 provision.go:172] copyRemoteCerts
	I0213 19:29:49.729871   57047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 19:29:49.729930   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:49.781109   57047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57725 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/default-k8s-diff-port-069000/id_rsa Username:docker}
	I0213 19:29:49.886871   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 19:29:49.948004   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0213 19:29:49.989465   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 19:29:50.037074   57047 provision.go:86] duration metric: configureAuth took 420.084455ms
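Note: configureAuth above regenerates the machine's server certificate with the SAN list shown on the provision.go:112 line and copies it to /etc/docker inside the node. A hedged host-side check that those SANs actually landed in the generated cert (openssl is assumed to be on the host PATH; the file path is the one logged above):

	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'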
	I0213 19:29:50.037093   57047 ubuntu.go:193] setting minikube options for container-runtime
	I0213 19:29:50.037247   57047 config.go:182] Loaded profile config "default-k8s-diff-port-069000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 19:29:50.037316   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:50.092555   57047 main.go:141] libmachine: Using SSH client type: native
	I0213 19:29:50.092871   57047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57725 <nil> <nil>}
	I0213 19:29:50.092882   57047 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 19:29:50.232549   57047 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 19:29:50.232564   57047 ubuntu.go:71] root file system type: overlay
	I0213 19:29:50.232648   57047 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 19:29:50.232721   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:50.284880   57047 main.go:141] libmachine: Using SSH client type: native
	I0213 19:29:50.285221   57047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57725 <nil> <nil>}
	I0213 19:29:50.285275   57047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 19:29:50.451928   57047 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 19:29:50.452027   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:50.504443   57047 main.go:141] libmachine: Using SSH client type: native
	I0213 19:29:50.504739   57047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57725 <nil> <nil>}
	I0213 19:29:50.504753   57047 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 19:29:50.658109   57047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 19:29:50.658132   57047 machine.go:91] provisioned docker machine in 4.467570643s
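Note: the docker.service update above is deliberately idempotent: the freshly rendered unit is written to docker.service.new, and only if `diff -u` reports a difference is the file moved into place and the daemon reloaded, re-enabled and restarted, so an unchanged unit costs nothing on repeated starts. A minimal standalone sketch of the same pattern (the unit name and the render step are illustrative, not taken from minikube):

	render_my_unit > /tmp/my.service.new    # hypothetical step producing the desired unit file
	sudo diff -u /lib/systemd/system/my.service /tmp/my.service.new || {
	  sudo mv /tmp/my.service.new /lib/systemd/system/my.service
	  sudo systemctl daemon-reload && sudo systemctl enable my.service && sudo systemctl restart my.service
	}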
	I0213 19:29:50.658141   57047 start.go:300] post-start starting for "default-k8s-diff-port-069000" (driver="docker")
	I0213 19:29:50.658153   57047 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 19:29:50.658231   57047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 19:29:50.658292   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:50.709886   57047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57725 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/default-k8s-diff-port-069000/id_rsa Username:docker}
	I0213 19:29:50.813854   57047 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 19:29:50.817883   57047 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 19:29:50.817909   57047 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 19:29:50.817917   57047 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 19:29:50.817924   57047 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 19:29:50.817940   57047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/addons for local assets ...
	I0213 19:29:50.818044   57047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/files for local assets ...
	I0213 19:29:50.818235   57047 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> 388992.pem in /etc/ssl/certs
	I0213 19:29:50.818445   57047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 19:29:50.833062   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:29:50.872776   57047 start.go:303] post-start completed in 214.614572ms
	I0213 19:29:50.872857   57047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 19:29:50.872923   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:50.926101   57047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57725 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/default-k8s-diff-port-069000/id_rsa Username:docker}
	I0213 19:29:51.023745   57047 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 19:29:51.029373   57047 fix.go:56] fixHost completed within 5.344478631s
	I0213 19:29:51.029393   57047 start.go:83] releasing machines lock for "default-k8s-diff-port-069000", held for 5.344525078s
	I0213 19:29:51.029503   57047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-069000
	I0213 19:29:51.084365   57047 ssh_runner.go:195] Run: cat /version.json
	I0213 19:29:51.084375   57047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 19:29:51.084440   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:51.084443   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:51.141414   57047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57725 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/default-k8s-diff-port-069000/id_rsa Username:docker}
	I0213 19:29:51.141753   57047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57725 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/default-k8s-diff-port-069000/id_rsa Username:docker}
	I0213 19:29:51.347543   57047 ssh_runner.go:195] Run: systemctl --version
	I0213 19:29:51.352345   57047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 19:29:51.357551   57047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 19:29:51.387659   57047 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 19:29:51.387741   57047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 19:29:51.402800   57047 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
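Note: the two find commands above first patch any loopback CNI config (adding a "name" field and pinning cniVersion to 1.0.0), then rename competing bridge/podman configs to *.mk_disabled so the CNI that minikube selects later is the only one left in /etc/cni/net.d; in this run nothing needed disabling. A hedged check of the resulting directory from the host (command is illustrative, not part of the test):

	minikube -p default-k8s-diff-port-069000 ssh -- ls -la /etc/cni/net.d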
	I0213 19:29:51.402821   57047 start.go:475] detecting cgroup driver to use...
	I0213 19:29:51.402834   57047 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:29:51.402954   57047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:29:51.430295   57047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 19:29:51.447329   57047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 19:29:51.463973   57047 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 19:29:51.464040   57047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 19:29:51.480701   57047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:29:51.497201   57047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 19:29:51.513922   57047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:29:51.530188   57047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 19:29:51.545843   57047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 19:29:51.562156   57047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 19:29:51.576778   57047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 19:29:51.591870   57047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:29:51.651124   57047 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 19:29:51.753162   57047 start.go:475] detecting cgroup driver to use...
	I0213 19:29:51.753190   57047 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:29:51.753255   57047 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 19:29:51.773981   57047 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 19:29:51.774051   57047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 19:29:51.795450   57047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:29:51.828164   57047 ssh_runner.go:195] Run: which cri-dockerd
	I0213 19:29:51.835699   57047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 19:29:51.885741   57047 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 19:29:51.917931   57047 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 19:29:52.016336   57047 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 19:29:52.112706   57047 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 19:29:52.112797   57047 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 19:29:52.142125   57047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:29:52.203361   57047 ssh_runner.go:195] Run: sudo systemctl restart docker
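Note: the 130-byte daemon.json written above is what actually switches dockerd inside the node to the cgroupfs driver detected on the host; its contents are not echoed in the log. A hedged way to read it back from inside the node (expect at least an exec-opts entry selecting native.cgroupdriver=cgroupfs):

	minikube -p default-k8s-diff-port-069000 ssh -- sudo cat /etc/docker/daemon.json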
	I0213 19:29:52.501922   57047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 19:29:52.520917   57047 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 19:29:52.542371   57047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 19:29:52.560477   57047 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 19:29:52.625790   57047 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 19:29:52.688291   57047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:29:52.754019   57047 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 19:29:52.787336   57047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 19:29:52.804422   57047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:29:52.869116   57047 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 19:29:52.957668   57047 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 19:29:52.957854   57047 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 19:29:52.962672   57047 start.go:543] Will wait 60s for crictl version
	I0213 19:29:52.962739   57047 ssh_runner.go:195] Run: which crictl
	I0213 19:29:52.966995   57047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 19:29:53.021185   57047 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
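Note: the crictl check above shows the cri-dockerd shim answering the CRI v1 API with Docker 24.0.7 behind it. A hedged in-node follow-up that lists containers through the same socket (binary and socket path taken from the log; the command is illustrative, not part of the test):

	sudo /usr/bin/crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a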
	I0213 19:29:53.021270   57047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:29:53.043481   57047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:29:53.091615   57047 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0213 19:29:53.091800   57047 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-069000 dig +short host.docker.internal
	I0213 19:29:53.210288   57047 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 19:29:53.210473   57047 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 19:29:53.215604   57047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
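Note: the host-entry rewrite above follows a replace-then-append pattern: every existing host.minikube.internal line is filtered out, the fresh 192.168.65.254 mapping is appended, and the temp file is copied back over /etc/hosts, so repeated starts never accumulate duplicate entries. A hedged verification from the host:

	minikube -p default-k8s-diff-port-069000 ssh -- grep host.minikube.internal /etc/hosts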
	I0213 19:29:53.234030   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:53.295691   57047 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 19:29:53.295781   57047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:29:53.319258   57047 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0213 19:29:53.319285   57047 docker.go:615] Images already preloaded, skipping extraction
	I0213 19:29:53.319394   57047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:29:53.339273   57047 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0213 19:29:53.339294   57047 cache_images.go:84] Images are preloaded, skipping loading
	I0213 19:29:53.339414   57047 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 19:29:53.386776   57047 cni.go:84] Creating CNI manager for ""
	I0213 19:29:53.386796   57047 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:29:53.386810   57047 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 19:29:53.386831   57047 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-069000 NodeName:default-k8s-diff-port-069000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 19:29:53.386951   57047 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-069000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 19:29:53.387031   57047 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-069000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-069000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0213 19:29:53.387104   57047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 19:29:53.402061   57047 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 19:29:53.402164   57047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 19:29:53.417754   57047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0213 19:29:53.446264   57047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 19:29:53.475714   57047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
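Note: the rendered kubeadm config shown earlier is shipped to /var/tmp/minikube/kubeadm.yaml.new (2111 bytes) and is later diffed against the existing kubeadm.yaml to decide between a restart and a full re-init. A hedged sanity parse of the multi-document YAML, assuming python3 with PyYAML happens to be present in the node image:

	# from a shell inside the node (minikube -p default-k8s-diff-port-069000 ssh)
	sudo python3 -c "import yaml; list(yaml.safe_load_all(open('/var/tmp/minikube/kubeadm.yaml.new'))); print('ok')"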
	I0213 19:29:53.505652   57047 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0213 19:29:53.510345   57047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 19:29:53.529276   57047 certs.go:56] Setting up /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000 for IP: 192.168.67.2
	I0213 19:29:53.529298   57047 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5f1a81e3b2f96d4314e8cdee92a3e3396cb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:29:53.529484   57047 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key
	I0213 19:29:53.529564   57047 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key
	I0213 19:29:53.529654   57047 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.key
	I0213 19:29:53.529740   57047 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/apiserver.key.c7fa3a9e
	I0213 19:29:53.529814   57047 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/proxy-client.key
	I0213 19:29:53.530041   57047 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem (1338 bytes)
	W0213 19:29:53.530087   57047 certs.go:433] ignoring /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899_empty.pem, impossibly tiny 0 bytes
	I0213 19:29:53.530097   57047 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 19:29:53.530133   57047 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem (1078 bytes)
	I0213 19:29:53.530176   57047 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem (1123 bytes)
	I0213 19:29:53.530205   57047 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem (1679 bytes)
	I0213 19:29:53.530290   57047 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:29:53.530928   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 19:29:53.571957   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 19:29:53.611901   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 19:29:53.653311   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 19:29:53.694933   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 19:29:53.736603   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 19:29:53.777918   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 19:29:53.819739   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 19:29:53.861084   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /usr/share/ca-certificates/388992.pem (1708 bytes)
	I0213 19:29:53.903000   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 19:29:53.944659   57047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem --> /usr/share/ca-certificates/38899.pem (1338 bytes)
	I0213 19:29:53.988352   57047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 19:29:54.021726   57047 ssh_runner.go:195] Run: openssl version
	I0213 19:29:54.029353   57047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38899.pem && ln -fs /usr/share/ca-certificates/38899.pem /etc/ssl/certs/38899.pem"
	I0213 19:29:54.048146   57047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38899.pem
	I0213 19:29:54.052791   57047 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 02:17 /usr/share/ca-certificates/38899.pem
	I0213 19:29:54.052855   57047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38899.pem
	I0213 19:29:54.059576   57047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38899.pem /etc/ssl/certs/51391683.0"
	I0213 19:29:54.075404   57047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388992.pem && ln -fs /usr/share/ca-certificates/388992.pem /etc/ssl/certs/388992.pem"
	I0213 19:29:54.092096   57047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388992.pem
	I0213 19:29:54.096958   57047 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 02:17 /usr/share/ca-certificates/388992.pem
	I0213 19:29:54.097043   57047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388992.pem
	I0213 19:29:54.104045   57047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/388992.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 19:29:54.119268   57047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 19:29:54.135167   57047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:29:54.141176   57047 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:29:54.141226   57047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:29:54.147797   57047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 19:29:54.162590   57047 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 19:29:54.166843   57047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 19:29:54.173178   57047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 19:29:54.180953   57047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 19:29:54.187314   57047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 19:29:54.193747   57047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 19:29:54.200349   57047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
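Note: each openssl call above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is how minikube decides whether any of the in-node certs need regenerating before reusing them. A hedged host-side equivalent for the profile's client certificate (path follows the profile layout shown earlier in the log):

	openssl x509 -noout -checkend 86400 \
	  -in /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"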
	I0213 19:29:54.206995   57047 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-069000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-069000 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:29:54.207098   57047 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:29:54.225774   57047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 19:29:54.240789   57047 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 19:29:54.240807   57047 kubeadm.go:636] restartCluster start
	I0213 19:29:54.240895   57047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 19:29:54.255733   57047 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:54.255814   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:29:54.311445   57047 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-069000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:29:54.311600   57047 kubeconfig.go:146] "default-k8s-diff-port-069000" context is missing from /Users/jenkins/minikube-integration/18165-38421/kubeconfig - will repair!
	I0213 19:29:54.311948   57047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/kubeconfig: {Name:mk18bf84f3ce48ab7f0238c5bd9b6dfe6fbb866a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:29:54.313494   57047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 19:29:54.328850   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:54.328920   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:54.344829   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:54.829603   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:54.829783   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:54.849159   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:55.331007   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:55.331172   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:55.350603   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:55.830691   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:55.830809   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:55.851225   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:56.328959   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:56.329039   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:56.346078   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:56.830258   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:56.830394   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:56.849165   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:57.330946   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:57.331049   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:57.350382   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:57.830102   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:57.830297   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:57.848661   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:58.329766   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:58.329981   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:58.348544   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:58.830979   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:58.831138   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:58.850054   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:59.329738   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:59.329819   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:59.346504   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:29:59.829504   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:29:59.829655   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:29:59.850236   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:00.329023   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:30:00.329112   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:30:00.350231   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:00.829599   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:30:00.829744   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:30:00.848350   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:01.329695   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:30:01.329857   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:30:01.347869   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:01.829679   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:30:01.829823   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:30:01.848011   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:02.329100   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:30:02.329199   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:30:02.347494   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:02.829410   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:30:02.829571   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:30:02.848977   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:03.330970   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:30:03.331178   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:30:03.351089   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:03.829678   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:30:03.829846   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:30:03.848067   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:04.329472   57047 api_server.go:166] Checking apiserver status ...
	I0213 19:30:04.329597   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:30:04.347163   57047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:04.347182   57047 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 19:30:04.347194   57047 kubeadm.go:1135] stopping kube-system containers ...
	I0213 19:30:04.347268   57047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:30:04.368056   57047 docker.go:483] Stopping containers: [153bd191fe48 dd326e337ed4 358ad1671883 7ecacb40e1d0 9c6434cf6225 e57bfbb6002e 557be47dface a5c58f49a190 2639a099dd6f 7a1fbfce72bb c78159a2ce2e 9d91a7b8535e ba68a81a6795 f972917f904e 246a9dc527d1 f0ea7ebdd454]
	I0213 19:30:04.368142   57047 ssh_runner.go:195] Run: docker stop 153bd191fe48 dd326e337ed4 358ad1671883 7ecacb40e1d0 9c6434cf6225 e57bfbb6002e 557be47dface a5c58f49a190 2639a099dd6f 7a1fbfce72bb c78159a2ce2e 9d91a7b8535e ba68a81a6795 f972917f904e 246a9dc527d1 f0ea7ebdd454
	I0213 19:30:04.387489   57047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 19:30:04.404879   57047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:30:04.419647   57047 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 14 03:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 14 03:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Feb 14 03:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 14 03:28 /etc/kubernetes/scheduler.conf
	
	I0213 19:30:04.419712   57047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0213 19:30:04.435653   57047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0213 19:30:04.450453   57047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0213 19:30:04.464948   57047 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:04.465022   57047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0213 19:30:04.481290   57047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0213 19:30:04.499142   57047 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:30:04.499211   57047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0213 19:30:04.516688   57047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 19:30:04.534166   57047 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 19:30:04.534183   57047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:30:04.592543   57047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:30:05.156428   57047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:30:05.311840   57047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:30:05.386445   57047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:30:05.498482   57047 api_server.go:52] waiting for apiserver process to appear ...
	I0213 19:30:05.498602   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:30:05.999565   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:30:06.498727   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:30:06.580961   57047 api_server.go:72] duration metric: took 1.082478207s to wait for apiserver process to appear ...
	I0213 19:30:06.580984   57047 api_server.go:88] waiting for apiserver healthz status ...
	I0213 19:30:06.581010   57047 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57729/healthz ...
	I0213 19:30:09.299846   57047 api_server.go:279] https://127.0.0.1:57729/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 19:30:09.299887   57047 api_server.go:103] status: https://127.0.0.1:57729/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 19:30:09.299903   57047 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57729/healthz ...
	I0213 19:30:09.480055   57047 api_server.go:279] https://127.0.0.1:57729/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:30:09.480092   57047 api_server.go:103] status: https://127.0.0.1:57729/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:30:09.581898   57047 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57729/healthz ...
	I0213 19:30:09.587863   57047 api_server.go:279] https://127.0.0.1:57729/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:30:09.587881   57047 api_server.go:103] status: https://127.0.0.1:57729/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:30:10.081106   57047 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57729/healthz ...
	I0213 19:30:10.088709   57047 api_server.go:279] https://127.0.0.1:57729/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:30:10.088728   57047 api_server.go:103] status: https://127.0.0.1:57729/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:30:10.581070   57047 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57729/healthz ...
	I0213 19:30:10.593441   57047 api_server.go:279] https://127.0.0.1:57729/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:30:10.593464   57047 api_server.go:103] status: https://127.0.0.1:57729/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:30:11.081150   57047 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57729/healthz ...
	I0213 19:30:11.089950   57047 api_server.go:279] https://127.0.0.1:57729/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:30:11.089988   57047 api_server.go:103] status: https://127.0.0.1:57729/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:30:11.581864   57047 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57729/healthz ...
	I0213 19:30:11.588681   57047 api_server.go:279] https://127.0.0.1:57729/healthz returned 200:
	ok
	I0213 19:30:11.596075   57047 api_server.go:141] control plane version: v1.28.4
	I0213 19:30:11.596092   57047 api_server.go:131] duration metric: took 5.015113441s to wait for apiserver health ...
	I0213 19:30:11.596100   57047 cni.go:84] Creating CNI manager for ""
	I0213 19:30:11.596110   57047 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:30:11.620605   57047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 19:30:11.642754   57047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 19:30:11.658950   57047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 19:30:11.687483   57047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 19:30:11.695262   57047 system_pods.go:59] 8 kube-system pods found
	I0213 19:30:11.695281   57047 system_pods.go:61] "coredns-5dd5756b68-nh8dh" [5dd6dc74-5d75-4d55-b37e-f58f91a9e104] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 19:30:11.695287   57047 system_pods.go:61] "etcd-default-k8s-diff-port-069000" [242c3413-91c4-4743-8b8c-ea2fcff6e9a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 19:30:11.695294   57047 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-069000" [0e40301c-4d41-422d-b284-04cee4be7bb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 19:30:11.695303   57047 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-069000" [d9e45501-04ea-4834-9eea-009e471bbe15] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 19:30:11.695309   57047 system_pods.go:61] "kube-proxy-f9rnz" [19eabc39-1a25-4c23-a31b-1b3b19d9fb35] Running
	I0213 19:30:11.695313   57047 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-069000" [8392241e-59e9-4c35-bf89-408de09af74f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 19:30:11.695320   57047 system_pods.go:61] "metrics-server-57f55c9bc5-fvsc7" [bbc3b9e0-4781-4e34-9928-2749f498c949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 19:30:11.695324   57047 system_pods.go:61] "storage-provisioner" [ee6fed1f-f99b-4baa-b41a-0c0331816659] Running
	I0213 19:30:11.695329   57047 system_pods.go:74] duration metric: took 7.831218ms to wait for pod list to return data ...
	I0213 19:30:11.695335   57047 node_conditions.go:102] verifying NodePressure condition ...
	I0213 19:30:11.698602   57047 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 19:30:11.698617   57047 node_conditions.go:123] node cpu capacity is 12
	I0213 19:30:11.698627   57047 node_conditions.go:105] duration metric: took 3.288908ms to run NodePressure ...
	I0213 19:30:11.698639   57047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:30:11.836385   57047 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 19:30:11.840469   57047 kubeadm.go:787] kubelet initialised
	I0213 19:30:11.840481   57047 kubeadm.go:788] duration metric: took 4.080208ms waiting for restarted kubelet to initialise ...
	I0213 19:30:11.840488   57047 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 19:30:11.846906   57047 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:13.854155   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:16.352895   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:18.855540   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:21.353161   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:23.853109   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:25.854328   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:27.856085   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:30.355478   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:32.854549   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:34.855131   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:36.856869   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:39.354844   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:41.854691   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:44.354318   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:46.855635   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:49.354552   57047 pod_ready.go:102] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:50.353384   57047 pod_ready.go:92] pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace has status "Ready":"True"
	I0213 19:30:50.353396   57047 pod_ready.go:81] duration metric: took 38.506573668s waiting for pod "coredns-5dd5756b68-nh8dh" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:50.353406   57047 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:50.358132   57047 pod_ready.go:92] pod "etcd-default-k8s-diff-port-069000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:30:50.358143   57047 pod_ready.go:81] duration metric: took 4.717243ms waiting for pod "etcd-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:50.358149   57047 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:50.362429   57047 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-069000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:30:50.362439   57047 pod_ready.go:81] duration metric: took 4.285029ms waiting for pod "kube-apiserver-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:50.362448   57047 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:50.367286   57047 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-069000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:30:50.367297   57047 pod_ready.go:81] duration metric: took 4.843821ms waiting for pod "kube-controller-manager-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:50.367304   57047 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f9rnz" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:50.371937   57047 pod_ready.go:92] pod "kube-proxy-f9rnz" in "kube-system" namespace has status "Ready":"True"
	I0213 19:30:50.371950   57047 pod_ready.go:81] duration metric: took 4.63965ms waiting for pod "kube-proxy-f9rnz" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:50.371956   57047 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:50.752347   57047 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-069000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:30:50.752361   57047 pod_ready.go:81] duration metric: took 380.393639ms waiting for pod "kube-scheduler-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:50.752368   57047 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace to be "Ready" ...
	I0213 19:30:52.757889   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:54.759951   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:57.258461   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:30:59.259758   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:01.759203   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:04.259338   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:06.259761   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:08.759284   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:10.760255   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:13.261430   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:15.760780   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:17.761739   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:20.258315   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:22.259765   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:24.759695   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:27.261613   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:29.758073   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:31.758317   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:34.261476   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:36.831581   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:39.329419   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:41.828875   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:43.830106   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:46.330723   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:48.828655   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:50.832186   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:53.330660   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:55.829632   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:31:58.328573   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:00.329355   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:02.830068   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:04.831087   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:06.855161   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:09.330152   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:11.830777   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:14.329916   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:16.330541   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:18.830175   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:20.833649   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:23.329957   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:25.330035   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:27.829838   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:29.830377   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:32.329171   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:34.832295   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:37.329138   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:39.829910   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:41.830181   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:43.830242   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:46.331432   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:48.829785   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:51.330752   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:53.829425   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:55.830315   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:32:58.330174   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:00.331011   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:02.830260   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:05.330515   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:07.829522   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:09.832541   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:12.328918   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:14.330675   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:16.831355   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:19.331273   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:21.830710   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:24.330607   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:26.828948   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:28.831911   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:31.330457   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:33.332597   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:35.829907   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:37.831222   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:39.832123   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:42.329463   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:44.829307   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:46.830022   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:48.833446   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:51.334545   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:53.830664   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:56.332202   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:33:58.830254   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:00.831245   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:03.330479   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:05.832035   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:08.329938   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:10.831950   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:13.330954   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:15.831868   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:17.833096   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:20.331485   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:22.832538   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:25.329805   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:27.330460   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:29.331489   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:31.831588   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:34.330036   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:36.330287   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:38.353814   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:40.830340   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:42.831496   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:45.332090   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:47.333217   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:49.831586   57047 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace has status "Ready":"False"
	I0213 19:34:50.825961   57047 pod_ready.go:81] duration metric: took 4m0.001594462s waiting for pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace to be "Ready" ...
	E0213 19:34:50.826005   57047 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-fvsc7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 19:34:50.826023   57047 pod_ready.go:38] duration metric: took 4m38.913649103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 19:34:50.826065   57047 kubeadm.go:640] restartCluster took 4m56.513415677s
	W0213 19:34:50.826128   57047 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 19:34:50.826159   57047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0213 19:34:57.458781   57047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (6.632535358s)
	I0213 19:34:57.458846   57047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 19:34:57.475839   57047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 19:34:57.491430   57047 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 19:34:57.491490   57047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:34:57.506815   57047 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 19:34:57.506845   57047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 19:34:57.553366   57047 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 19:34:57.553419   57047 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 19:34:57.678112   57047 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 19:34:57.678309   57047 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 19:34:57.678436   57047 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 19:34:57.985081   57047 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 19:34:58.010809   57047 out.go:204]   - Generating certificates and keys ...
	I0213 19:34:58.010875   57047 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 19:34:58.010932   57047 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 19:34:58.011042   57047 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 19:34:58.011108   57047 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 19:34:58.011170   57047 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 19:34:58.011227   57047 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 19:34:58.011279   57047 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 19:34:58.011365   57047 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 19:34:58.011438   57047 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 19:34:58.011508   57047 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 19:34:58.011542   57047 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 19:34:58.011592   57047 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 19:34:58.089880   57047 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 19:34:58.317800   57047 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 19:34:58.403600   57047 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 19:34:58.482958   57047 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 19:34:58.483445   57047 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 19:34:58.485344   57047 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 19:34:58.506934   57047 out.go:204]   - Booting up control plane ...
	I0213 19:34:58.507029   57047 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 19:34:58.507152   57047 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 19:34:58.507247   57047 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 19:34:58.507470   57047 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 19:34:58.507551   57047 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 19:34:58.507622   57047 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 19:34:58.595919   57047 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 19:35:03.598943   57047 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.003268 seconds
	I0213 19:35:03.599038   57047 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 19:35:03.607569   57047 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 19:35:04.126897   57047 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 19:35:04.127070   57047 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-069000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 19:35:04.635596   57047 kubeadm.go:322] [bootstrap-token] Using token: 2nlufn.leuxs0br3m0go6l0
	I0213 19:35:04.675003   57047 out.go:204]   - Configuring RBAC rules ...
	I0213 19:35:04.675173   57047 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 19:35:04.678173   57047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 19:35:04.718316   57047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 19:35:04.721527   57047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 19:35:04.723961   57047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 19:35:04.726561   57047 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 19:35:04.735626   57047 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 19:35:04.887979   57047 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 19:35:05.084267   57047 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 19:35:05.085152   57047 kubeadm.go:322] 
	I0213 19:35:05.085211   57047 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 19:35:05.085222   57047 kubeadm.go:322] 
	I0213 19:35:05.085353   57047 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 19:35:05.085366   57047 kubeadm.go:322] 
	I0213 19:35:05.085395   57047 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 19:35:05.085471   57047 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 19:35:05.085564   57047 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 19:35:05.085574   57047 kubeadm.go:322] 
	I0213 19:35:05.085640   57047 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 19:35:05.085654   57047 kubeadm.go:322] 
	I0213 19:35:05.085722   57047 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 19:35:05.085737   57047 kubeadm.go:322] 
	I0213 19:35:05.085816   57047 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 19:35:05.085941   57047 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 19:35:05.086072   57047 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 19:35:05.086098   57047 kubeadm.go:322] 
	I0213 19:35:05.086198   57047 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 19:35:05.086300   57047 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 19:35:05.086317   57047 kubeadm.go:322] 
	I0213 19:35:05.086419   57047 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 2nlufn.leuxs0br3m0go6l0 \
	I0213 19:35:05.086537   57047 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:37f5d29b605db0b241ae071f2c67ba54403aaba5987d1730ec948834f9a4aa2b \
	I0213 19:35:05.086563   57047 kubeadm.go:322] 	--control-plane 
	I0213 19:35:05.086571   57047 kubeadm.go:322] 
	I0213 19:35:05.086727   57047 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 19:35:05.086744   57047 kubeadm.go:322] 
	I0213 19:35:05.086848   57047 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 2nlufn.leuxs0br3m0go6l0 \
	I0213 19:35:05.087015   57047 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:37f5d29b605db0b241ae071f2c67ba54403aaba5987d1730ec948834f9a4aa2b 
	I0213 19:35:05.093780   57047 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0213 19:35:05.093881   57047 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 19:35:05.093915   57047 cni.go:84] Creating CNI manager for ""
	I0213 19:35:05.093927   57047 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:35:05.115824   57047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 19:35:05.137858   57047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 19:35:05.176227   57047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 19:35:05.287889   57047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 19:35:05.287981   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:05.287983   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a5eca87e70081d242c0fa2e2466e3725e217444d minikube.k8s.io/name=default-k8s-diff-port-069000 minikube.k8s.io/updated_at=2024_02_13T19_35_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:05.412515   57047 ops.go:34] apiserver oom_adj: -16
	I0213 19:35:05.412565   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:05.913694   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:06.413679   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:06.913830   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:07.413038   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:07.912844   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:08.412945   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:08.912690   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:09.413112   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:09.912781   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:10.412774   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:10.912852   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:11.412659   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:11.912838   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:12.412729   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:12.913112   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:13.412754   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:13.912863   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:14.412960   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:14.914181   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:15.412913   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:15.913731   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:16.412844   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:16.912721   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:17.413211   57047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 19:35:17.499110   57047 kubeadm.go:1088] duration metric: took 12.211096974s to wait for elevateKubeSystemPrivileges.
	I0213 19:35:17.499128   57047 kubeadm.go:406] StartCluster complete in 5m23.220039592s
	I0213 19:35:17.499146   57047 settings.go:142] acquiring lock: {Name:mke46562c9f92468d93bd6cd756238f74ba38936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:35:17.499232   57047 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:35:17.499828   57047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/kubeconfig: {Name:mk18bf84f3ce48ab7f0238c5bd9b6dfe6fbb866a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:35:17.500104   57047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 19:35:17.500124   57047 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 19:35:17.500177   57047 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-069000"
	I0213 19:35:17.500189   57047 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-069000"
	I0213 19:35:17.500198   57047 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-069000"
	I0213 19:35:17.500200   57047 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-069000"
	W0213 19:35:17.500206   57047 addons.go:243] addon dashboard should already be in state true
	I0213 19:35:17.500179   57047 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-069000"
	I0213 19:35:17.500235   57047 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-069000"
	W0213 19:35:17.500245   57047 addons.go:243] addon storage-provisioner should already be in state true
	I0213 19:35:17.500183   57047 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-069000"
	I0213 19:35:17.500249   57047 host.go:66] Checking if "default-k8s-diff-port-069000" exists ...
	I0213 19:35:17.500252   57047 config.go:182] Loaded profile config "default-k8s-diff-port-069000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 19:35:17.500265   57047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-069000"
	I0213 19:35:17.500270   57047 host.go:66] Checking if "default-k8s-diff-port-069000" exists ...
	W0213 19:35:17.500208   57047 addons.go:243] addon metrics-server should already be in state true
	I0213 19:35:17.500315   57047 host.go:66] Checking if "default-k8s-diff-port-069000" exists ...
	I0213 19:35:17.500652   57047 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-069000 --format={{.State.Status}}
	I0213 19:35:17.500712   57047 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-069000 --format={{.State.Status}}
	I0213 19:35:17.500724   57047 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-069000 --format={{.State.Status}}
	I0213 19:35:17.500821   57047 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-069000 --format={{.State.Status}}
	I0213 19:35:17.600625   57047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 19:35:17.578308   57047 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-069000"
	W0213 19:35:17.600709   57047 addons.go:243] addon default-storageclass should already be in state true
	I0213 19:35:17.710579   57047 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 19:35:17.636748   57047 host.go:66] Checking if "default-k8s-diff-port-069000" exists ...
	I0213 19:35:17.636776   57047 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 19:35:17.673476   57047 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0213 19:35:17.682280   57047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 19:35:17.748807   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 19:35:17.748919   57047 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 19:35:17.788496   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 19:35:17.749019   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:35:17.749687   57047 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-069000 --format={{.State.Status}}
	I0213 19:35:17.788622   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:35:17.825608   57047 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0213 19:35:17.863974   57047 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0213 19:35:17.864012   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0213 19:35:17.864161   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:35:17.910220   57047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57725 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/default-k8s-diff-port-069000/id_rsa Username:docker}
	I0213 19:35:17.910409   57047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57725 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/default-k8s-diff-port-069000/id_rsa Username:docker}
	I0213 19:35:17.912579   57047 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 19:35:17.912593   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 19:35:17.912677   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:35:17.940916   57047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57725 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/default-k8s-diff-port-069000/id_rsa Username:docker}
	I0213 19:35:17.980362   57047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57725 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/default-k8s-diff-port-069000/id_rsa Username:docker}
	I0213 19:35:18.057155   57047 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-069000" context rescaled to 1 replicas
	I0213 19:35:18.057181   57047 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 19:35:18.079618   57047 out.go:177] * Verifying Kubernetes components...
	I0213 19:35:18.120699   57047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 19:35:18.289937   57047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 19:35:18.289957   57047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 19:35:18.289967   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 19:35:18.372670   57047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 19:35:18.372692   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 19:35:18.381044   57047 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0213 19:35:18.381060   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0213 19:35:18.381108   57047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 19:35:18.474279   57047 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 19:35:18.474293   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 19:35:18.481310   57047 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0213 19:35:18.481328   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0213 19:35:18.664619   57047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 19:35:18.674189   57047 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0213 19:35:18.674206   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0213 19:35:18.860811   57047 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0213 19:35:18.860830   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0213 19:35:18.974594   57047 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0213 19:35:18.974629   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0213 19:35:19.163983   57047 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0213 19:35:19.164004   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0213 19:35:19.363817   57047 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0213 19:35:19.363836   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0213 19:35:19.484258   57047 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0213 19:35:19.484275   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0213 19:35:19.576663   57047 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.827882766s)
	I0213 19:35:19.576689   57047 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0213 19:35:19.576724   57047 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.455958801s)
	I0213 19:35:19.576772   57047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.286755215s)
	I0213 19:35:19.576876   57047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-069000
	I0213 19:35:19.633418   57047 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-069000" to be "Ready" ...
	I0213 19:35:19.654404   57047 node_ready.go:49] node "default-k8s-diff-port-069000" has status "Ready":"True"
	I0213 19:35:19.654433   57047 node_ready.go:38] duration metric: took 20.980058ms waiting for node "default-k8s-diff-port-069000" to be "Ready" ...
	I0213 19:35:19.654442   57047 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 19:35:19.668098   57047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-s7skm" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:19.675105   57047 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0213 19:35:19.675129   57047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0213 19:35:19.861147   57047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0213 19:35:20.461273   57047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.080095363s)
	I0213 19:35:20.478713   57047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.814010809s)
	I0213 19:35:20.478744   57047 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-069000"
	I0213 19:35:21.180957   57047 pod_ready.go:92] pod "coredns-5dd5756b68-s7skm" in "kube-system" namespace has status "Ready":"True"
	I0213 19:35:21.180986   57047 pod_ready.go:81] duration metric: took 1.512855163s waiting for pod "coredns-5dd5756b68-s7skm" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.180994   57047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-z42ks" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.184277   57047 pod_ready.go:97] error getting pod "coredns-5dd5756b68-z42ks" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-z42ks" not found
	I0213 19:35:21.184292   57047 pod_ready.go:81] duration metric: took 3.293939ms waiting for pod "coredns-5dd5756b68-z42ks" in "kube-system" namespace to be "Ready" ...
	E0213 19:35:21.184301   57047 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-z42ks" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-z42ks" not found
	I0213 19:35:21.184327   57047 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.193908   57047 pod_ready.go:92] pod "etcd-default-k8s-diff-port-069000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:35:21.193939   57047 pod_ready.go:81] duration metric: took 9.601653ms waiting for pod "etcd-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.193967   57047 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.255372   57047 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-069000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:35:21.255387   57047 pod_ready.go:81] duration metric: took 61.410241ms waiting for pod "kube-apiserver-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.255396   57047 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.262594   57047 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-069000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:35:21.262607   57047 pod_ready.go:81] duration metric: took 7.20436ms waiting for pod "kube-controller-manager-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.262634   57047 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zksbm" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.457270   57047 pod_ready.go:92] pod "kube-proxy-zksbm" in "kube-system" namespace has status "Ready":"True"
	I0213 19:35:21.457288   57047 pod_ready.go:81] duration metric: took 194.645866ms waiting for pod "kube-proxy-zksbm" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.457299   57047 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.681122   57047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.819909344s)
	I0213 19:35:21.707227   57047 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-069000 addons enable metrics-server
	
	I0213 19:35:21.728779   57047 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0213 19:35:21.751253   57047 addons.go:505] enable addons completed in 4.251084112s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0213 19:35:21.838202   57047 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-069000" in "kube-system" namespace has status "Ready":"True"
	I0213 19:35:21.838214   57047 pod_ready.go:81] duration metric: took 380.90509ms waiting for pod "kube-scheduler-default-k8s-diff-port-069000" in "kube-system" namespace to be "Ready" ...
	I0213 19:35:21.838220   57047 pod_ready.go:38] duration metric: took 2.183744505s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 19:35:21.838233   57047 api_server.go:52] waiting for apiserver process to appear ...
	I0213 19:35:21.838291   57047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:35:21.855411   57047 api_server.go:72] duration metric: took 3.798168782s to wait for apiserver process to appear ...
	I0213 19:35:21.855425   57047 api_server.go:88] waiting for apiserver healthz status ...
	I0213 19:35:21.855437   57047 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57729/healthz ...
	I0213 19:35:21.860869   57047 api_server.go:279] https://127.0.0.1:57729/healthz returned 200:
	ok
	I0213 19:35:21.862235   57047 api_server.go:141] control plane version: v1.28.4
	I0213 19:35:21.862268   57047 api_server.go:131] duration metric: took 6.838315ms to wait for apiserver health ...
	I0213 19:35:21.862274   57047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 19:35:22.039416   57047 system_pods.go:59] 8 kube-system pods found
	I0213 19:35:22.039430   57047 system_pods.go:61] "coredns-5dd5756b68-s7skm" [4b763c5f-5816-4e26-b33e-7351c736d3c6] Running
	I0213 19:35:22.039434   57047 system_pods.go:61] "etcd-default-k8s-diff-port-069000" [b0e32284-edf0-4e8e-b06a-8ea0fde68981] Running
	I0213 19:35:22.039438   57047 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-069000" [f5849c9e-7d3f-49df-afd9-390f6f90fa63] Running
	I0213 19:35:22.039442   57047 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-069000" [1beff814-7edd-4d44-b87d-83410587bdbc] Running
	I0213 19:35:22.039445   57047 system_pods.go:61] "kube-proxy-zksbm" [82442d3c-6172-4ff0-b15f-e5f0a4caef81] Running
	I0213 19:35:22.039448   57047 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-069000" [f5864638-8fb8-4c86-bd7d-3b82cbf0f129] Running
	I0213 19:35:22.039453   57047 system_pods.go:61] "metrics-server-57f55c9bc5-xnqqx" [2466b190-c1ae-4a37-adc4-7b0753b1ff9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 19:35:22.039458   57047 system_pods.go:61] "storage-provisioner" [07bf44a3-150b-4686-bb79-b0a69e2caab7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 19:35:22.039464   57047 system_pods.go:74] duration metric: took 177.185302ms to wait for pod list to return data ...
	I0213 19:35:22.039470   57047 default_sa.go:34] waiting for default service account to be created ...
	I0213 19:35:22.237652   57047 default_sa.go:45] found service account: "default"
	I0213 19:35:22.237678   57047 default_sa.go:55] duration metric: took 198.200126ms for default service account to be created ...
	I0213 19:35:22.237695   57047 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 19:35:22.441033   57047 system_pods.go:86] 8 kube-system pods found
	I0213 19:35:22.441048   57047 system_pods.go:89] "coredns-5dd5756b68-s7skm" [4b763c5f-5816-4e26-b33e-7351c736d3c6] Running
	I0213 19:35:22.441052   57047 system_pods.go:89] "etcd-default-k8s-diff-port-069000" [b0e32284-edf0-4e8e-b06a-8ea0fde68981] Running
	I0213 19:35:22.441056   57047 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-069000" [f5849c9e-7d3f-49df-afd9-390f6f90fa63] Running
	I0213 19:35:22.441059   57047 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-069000" [1beff814-7edd-4d44-b87d-83410587bdbc] Running
	I0213 19:35:22.441063   57047 system_pods.go:89] "kube-proxy-zksbm" [82442d3c-6172-4ff0-b15f-e5f0a4caef81] Running
	I0213 19:35:22.441066   57047 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-069000" [f5864638-8fb8-4c86-bd7d-3b82cbf0f129] Running
	I0213 19:35:22.441071   57047 system_pods.go:89] "metrics-server-57f55c9bc5-xnqqx" [2466b190-c1ae-4a37-adc4-7b0753b1ff9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 19:35:22.441077   57047 system_pods.go:89] "storage-provisioner" [07bf44a3-150b-4686-bb79-b0a69e2caab7] Running
	I0213 19:35:22.441082   57047 system_pods.go:126] duration metric: took 203.380701ms to wait for k8s-apps to be running ...
	I0213 19:35:22.441089   57047 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 19:35:22.441142   57047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 19:35:22.460033   57047 system_svc.go:56] duration metric: took 18.939447ms WaitForService to wait for kubelet.
	I0213 19:35:22.460062   57047 kubeadm.go:581] duration metric: took 4.402817338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 19:35:22.460074   57047 node_conditions.go:102] verifying NodePressure condition ...
	I0213 19:35:22.640165   57047 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 19:35:22.640178   57047 node_conditions.go:123] node cpu capacity is 12
	I0213 19:35:22.640195   57047 node_conditions.go:105] duration metric: took 180.109249ms to run NodePressure ...
	I0213 19:35:22.640204   57047 start.go:228] waiting for startup goroutines ...
	I0213 19:35:22.640209   57047 start.go:233] waiting for cluster config update ...
	I0213 19:35:22.640222   57047 start.go:242] writing updated cluster config ...
	I0213 19:35:22.640525   57047 ssh_runner.go:195] Run: rm -f paused
	I0213 19:35:22.683607   57047 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 19:35:22.707953   57047 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-069000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.038417857Z" level=info msg="Loading containers: start."
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.126201672Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.165065133Z" level=info msg="Loading containers: done."
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.172861120Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.172921650Z" level=info msg="Daemon has completed initialization"
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.192080634Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 03:18:14 old-k8s-version-187000 systemd[1]: Started Docker Application Container Engine.
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.192148976Z" level=info msg="API listen on [::]:2376"
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:22.560647039Z" level=info msg="Processing signal 'terminated'"
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:22.561661074Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:22.562254523Z" level=info msg="Daemon shutdown complete"
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: docker.service: Deactivated successfully.
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: Starting Docker Application Container Engine...
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.624136291Z" level=info msg="Starting up"
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.631799867Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.875735799Z" level=info msg="Loading containers: start."
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.970706234Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.007890871Z" level=info msg="Loading containers: done."
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.015528496Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.015592390Z" level=info msg="Daemon has completed initialization"
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.033874246Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.033915504Z" level=info msg="API listen on [::]:2376"
	Feb 14 03:18:23 old-k8s-version-187000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	time="2024-02-14T03:35:38Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 03:35:38 up  2:14,  0 users,  load average: 4.47, 4.97, 5.03
	Linux old-k8s-version-187000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 14 03:35:37 old-k8s-version-187000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 03:35:38 old-k8s-version-187000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 836.
	Feb 14 03:35:38 old-k8s-version-187000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 03:35:38 old-k8s-version-187000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31061]: I0214 03:35:38.150470   31061 server.go:410] Version: v1.16.0
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31061]: I0214 03:35:38.150774   31061 plugins.go:100] No cloud provider specified.
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31061]: I0214 03:35:38.150786   31061 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31061]: I0214 03:35:38.152580   31061 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31061]: W0214 03:35:38.153284   31061 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31061]: W0214 03:35:38.153352   31061 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31061]: F0214 03:35:38.153375   31061 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 03:35:38 old-k8s-version-187000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 03:35:38 old-k8s-version-187000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 03:35:38 old-k8s-version-187000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 837.
	Feb 14 03:35:38 old-k8s-version-187000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 03:35:38 old-k8s-version-187000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31170]: I0214 03:35:38.895521   31170 server.go:410] Version: v1.16.0
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31170]: I0214 03:35:38.895827   31170 plugins.go:100] No cloud provider specified.
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31170]: I0214 03:35:38.895875   31170 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31170]: I0214 03:35:38.897717   31170 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31170]: W0214 03:35:38.898356   31170 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31170]: W0214 03:35:38.898421   31170 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 03:35:38 old-k8s-version-187000 kubelet[31170]: F0214 03:35:38.898443   31170 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 03:35:38 old-k8s-version-187000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 03:35:38 old-k8s-version-187000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 2 (417.844462ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-187000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.45s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (383.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:36:17.793116   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/calico-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:36:21.772499   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:37:03.385321   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:37:46.778986   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:37:49.345934   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:39:18.172590   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:39:19.135123   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
E0213 19:39:19.140881   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
E0213 19:39:19.151168   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
E0213 19:39:19.173313   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
E0213 19:39:19.215474   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
E0213 19:39:19.296569   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
E0213 19:39:19.457217   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
E0213 19:39:19.778764   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:39:20.421037   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
E0213 19:39:21.701203   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
E0213 19:39:22.429871   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:39:22.456917   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:39:24.263391   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
E0213 19:39:29.383694   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:39:34.129185   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:39:39.624885   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:39:40.517109   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:40:00.107308   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:40:23.997015   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:40:41.068258   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
E0213 19:40:45.503734   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:40:47.877856   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:41:17.796073   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/calico-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 19:41:21.777267   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57240/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-187000 -n old-k8s-version-187000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 2 (396.3254ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-187000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-187000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-187000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.247µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-187000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-187000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-187000:

-- stdout --
	[
	    {
	        "Id": "e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f",
	        "Created": "2024-02-14T03:12:04.577549374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 384323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:18:07.857189614Z",
	            "FinishedAt": "2024-02-14T03:18:05.063436769Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/hosts",
	        "LogPath": "/var/lib/docker/containers/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f/e0b9362b2efd09022b870091b98cdfc206a0380e70d1b74c3dc42ceb3e098e5f-json.log",
	        "Name": "/old-k8s-version-187000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-187000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-187000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72-init/diff:/var/lib/docker/overlay2/3ed0de4aac6b7e329f9acd865d0c22fc7cd3ad67bb85f95f8605165150fb68c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c809f08c3fe15c84721952f204c528844488e74d4d3422d3f2c83b56532db72/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-187000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-187000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-187000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-187000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d045fd4df5a483f35dc86c4e54cb8d1019191d338bf04e887522b7ef448b5799",
	            "SandboxKey": "/var/run/docker/netns/d045fd4df5a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57241"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57242"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57238"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57239"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57240"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-187000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e0b9362b2efd",
	                        "old-k8s-version-187000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "4cb1b8693c9780c94ad8de0e0072aef11b304b625a6e68f12739c271830cb055",
	                    "EndpointID": "6d54b12cf11b7964d3b93a05495e26e918bbcb712685fd590903de422b0b5cf6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-187000",
	                        "e0b9362b2efd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 2 (397.14207ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-187000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-187000 logs -n 25: (1.366276405s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-815000                                  | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:28 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-815000                                  | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:28 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-815000                                  | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:28 PST |
	| delete  | -p embed-certs-815000                                  | embed-certs-815000           | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:28 PST |
	| delete  | -p                                                     | disable-driver-mounts-377000 | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:28 PST |
	|         | disable-driver-mounts-377000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:28 PST | 13 Feb 24 19:29 PST |
	|         | default-k8s-diff-port-069000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-069000  | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:29 PST | 13 Feb 24 19:29 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:29 PST | 13 Feb 24 19:29 PST |
	|         | default-k8s-diff-port-069000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-069000       | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:29 PST | 13 Feb 24 19:29 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:29 PST | 13 Feb 24 19:35 PST |
	|         | default-k8s-diff-port-069000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-069000                           | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:35 PST | 13 Feb 24 19:35 PST |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:35 PST | 13 Feb 24 19:35 PST |
	|         | default-k8s-diff-port-069000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:35 PST | 13 Feb 24 19:35 PST |
	|         | default-k8s-diff-port-069000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:35 PST | 13 Feb 24 19:35 PST |
	|         | default-k8s-diff-port-069000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-069000 | jenkins | v1.32.0 | 13 Feb 24 19:35 PST | 13 Feb 24 19:35 PST |
	|         | default-k8s-diff-port-069000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-886000 --memory=2200 --alsologtostderr   | newest-cni-886000            | jenkins | v1.32.0 | 13 Feb 24 19:35 PST | 13 Feb 24 19:36 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-886000             | newest-cni-886000            | jenkins | v1.32.0 | 13 Feb 24 19:36 PST | 13 Feb 24 19:36 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-886000                                   | newest-cni-886000            | jenkins | v1.32.0 | 13 Feb 24 19:36 PST | 13 Feb 24 19:36 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-886000                  | newest-cni-886000            | jenkins | v1.32.0 | 13 Feb 24 19:36 PST | 13 Feb 24 19:36 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-886000 --memory=2200 --alsologtostderr   | newest-cni-886000            | jenkins | v1.32.0 | 13 Feb 24 19:36 PST | 13 Feb 24 19:37 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| image   | newest-cni-886000 image list                           | newest-cni-886000            | jenkins | v1.32.0 | 13 Feb 24 19:37 PST | 13 Feb 24 19:37 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-886000                                   | newest-cni-886000            | jenkins | v1.32.0 | 13 Feb 24 19:37 PST | 13 Feb 24 19:37 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-886000                                   | newest-cni-886000            | jenkins | v1.32.0 | 13 Feb 24 19:37 PST | 13 Feb 24 19:37 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-886000                                   | newest-cni-886000            | jenkins | v1.32.0 | 13 Feb 24 19:37 PST | 13 Feb 24 19:37 PST |
	| delete  | -p newest-cni-886000                                   | newest-cni-886000            | jenkins | v1.32.0 | 13 Feb 24 19:37 PST | 13 Feb 24 19:37 PST |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 19:36:42
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 19:36:42.142764   57526 out.go:291] Setting OutFile to fd 1 ...
	I0213 19:36:42.143055   57526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:36:42.143061   57526 out.go:304] Setting ErrFile to fd 2...
	I0213 19:36:42.143065   57526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 19:36:42.143250   57526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 19:36:42.144677   57526 out.go:298] Setting JSON to false
	I0213 19:36:42.167981   57526 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":18661,"bootTime":1707863141,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 19:36:42.168093   57526 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 19:36:42.190479   57526 out.go:177] * [newest-cni-886000] minikube v1.32.0 on Darwin 14.3.1
	I0213 19:36:42.233095   57526 out.go:177]   - MINIKUBE_LOCATION=18165
	I0213 19:36:42.233180   57526 notify.go:220] Checking for updates...
	I0213 19:36:42.276637   57526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:36:42.298001   57526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 19:36:42.321037   57526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 19:36:42.341883   57526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	I0213 19:36:42.383951   57526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 19:36:42.409513   57526 config.go:182] Loaded profile config "newest-cni-886000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 19:36:42.410122   57526 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 19:36:42.468840   57526 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 19:36:42.469016   57526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:36:42.576983   57526 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 03:36:42.566863896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:36:42.619049   57526 out.go:177] * Using the docker driver based on existing profile
	I0213 19:36:42.641733   57526 start.go:298] selected driver: docker
	I0213 19:36:42.641756   57526 start.go:902] validating driver "docker" against &{Name:newest-cni-886000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-886000 Namespace:default APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Liste
nAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:36:42.641882   57526 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 19:36:42.645475   57526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 19:36:42.752514   57526 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 03:36:42.741687837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 19:36:42.752872   57526 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0213 19:36:42.752953   57526 cni.go:84] Creating CNI manager for ""
	I0213 19:36:42.752983   57526 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:36:42.753013   57526 start_flags.go:321] config:
	{Name:newest-cni-886000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-886000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:36:42.775005   57526 out.go:177] * Starting control plane node newest-cni-886000 in cluster newest-cni-886000
	I0213 19:36:42.796638   57526 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 19:36:42.817374   57526 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 19:36:42.859575   57526 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 19:36:42.859593   57526 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 19:36:42.859645   57526 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0213 19:36:42.859659   57526 cache.go:56] Caching tarball of preloaded images
	I0213 19:36:42.859768   57526 preload.go:174] Found /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 19:36:42.859777   57526 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0213 19:36:42.860383   57526 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/newest-cni-886000/config.json ...
	I0213 19:36:42.914893   57526 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 19:36:42.914916   57526 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 19:36:42.914931   57526 cache.go:194] Successfully downloaded all kic artifacts
	I0213 19:36:42.914965   57526 start.go:365] acquiring machines lock for newest-cni-886000: {Name:mk01c7dd766326fae3fa3153dea29212b149cbd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 19:36:42.915052   57526 start.go:369] acquired machines lock for "newest-cni-886000" in 69.645µs
	I0213 19:36:42.915075   57526 start.go:96] Skipping create...Using existing machine configuration
	I0213 19:36:42.915083   57526 fix.go:54] fixHost starting: 
	I0213 19:36:42.915311   57526 cli_runner.go:164] Run: docker container inspect newest-cni-886000 --format={{.State.Status}}
	I0213 19:36:42.966888   57526 fix.go:102] recreateIfNeeded on newest-cni-886000: state=Stopped err=<nil>
	W0213 19:36:42.966945   57526 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 19:36:42.988717   57526 out.go:177] * Restarting existing docker container for "newest-cni-886000" ...
	I0213 19:36:43.063487   57526 cli_runner.go:164] Run: docker start newest-cni-886000
	I0213 19:36:43.313928   57526 cli_runner.go:164] Run: docker container inspect newest-cni-886000 --format={{.State.Status}}
	I0213 19:36:43.369867   57526 kic.go:430] container "newest-cni-886000" state is running.
	I0213 19:36:43.370470   57526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886000
	I0213 19:36:43.429919   57526 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/newest-cni-886000/config.json ...
	I0213 19:36:43.430356   57526 machine.go:88] provisioning docker machine ...
	I0213 19:36:43.430384   57526 ubuntu.go:169] provisioning hostname "newest-cni-886000"
	I0213 19:36:43.430454   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:43.497018   57526 main.go:141] libmachine: Using SSH client type: native
	I0213 19:36:43.497553   57526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 58309 <nil> <nil>}
	I0213 19:36:43.497575   57526 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-886000 && echo "newest-cni-886000" | sudo tee /etc/hostname
	I0213 19:36:43.499207   57526 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0213 19:36:46.660397   57526 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-886000
	
	I0213 19:36:46.660525   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:46.714025   57526 main.go:141] libmachine: Using SSH client type: native
	I0213 19:36:46.714331   57526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 58309 <nil> <nil>}
	I0213 19:36:46.714345   57526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-886000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-886000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-886000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 19:36:46.853025   57526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 19:36:46.853045   57526 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18165-38421/.minikube CaCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18165-38421/.minikube}
	I0213 19:36:46.853065   57526 ubuntu.go:177] setting up certificates
	I0213 19:36:46.853073   57526 provision.go:83] configureAuth start
	I0213 19:36:46.853153   57526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886000
	I0213 19:36:46.904368   57526 provision.go:138] copyHostCerts
	I0213 19:36:46.904463   57526 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem, removing ...
	I0213 19:36:46.904476   57526 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
	I0213 19:36:46.904619   57526 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem (1078 bytes)
	I0213 19:36:46.904855   57526 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem, removing ...
	I0213 19:36:46.904862   57526 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
	I0213 19:36:46.904939   57526 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem (1123 bytes)
	I0213 19:36:46.905096   57526 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem, removing ...
	I0213 19:36:46.905102   57526 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
	I0213 19:36:46.905168   57526 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem (1679 bytes)
	I0213 19:36:46.905302   57526 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem org=jenkins.newest-cni-886000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-886000]
	I0213 19:36:47.142747   57526 provision.go:172] copyRemoteCerts
	I0213 19:36:47.142817   57526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 19:36:47.142871   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:47.195156   57526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58309 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/newest-cni-886000/id_rsa Username:docker}
	I0213 19:36:47.300301   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 19:36:47.343547   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 19:36:47.385972   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 19:36:47.426745   57526 provision.go:86] duration metric: configureAuth took 573.625072ms
	I0213 19:36:47.426774   57526 ubuntu.go:193] setting minikube options for container-runtime
	I0213 19:36:47.427008   57526 config.go:182] Loaded profile config "newest-cni-886000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 19:36:47.427115   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:47.481705   57526 main.go:141] libmachine: Using SSH client type: native
	I0213 19:36:47.482024   57526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 58309 <nil> <nil>}
	I0213 19:36:47.482034   57526 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 19:36:47.620116   57526 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 19:36:47.620134   57526 ubuntu.go:71] root file system type: overlay
	I0213 19:36:47.620227   57526 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 19:36:47.620319   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:47.672536   57526 main.go:141] libmachine: Using SSH client type: native
	I0213 19:36:47.672832   57526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 58309 <nil> <nil>}
	I0213 19:36:47.672883   57526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 19:36:47.836307   57526 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 19:36:47.836426   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:47.889868   57526 main.go:141] libmachine: Using SSH client type: native
	I0213 19:36:47.890212   57526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 58309 <nil> <nil>}
	I0213 19:36:47.890225   57526 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 19:36:48.041932   57526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 19:36:48.041952   57526 machine.go:91] provisioned docker machine in 4.611540657s
	I0213 19:36:48.041963   57526 start.go:300] post-start starting for "newest-cni-886000" (driver="docker")
	I0213 19:36:48.041973   57526 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 19:36:48.042044   57526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 19:36:48.042115   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:48.101376   57526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58309 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/newest-cni-886000/id_rsa Username:docker}
	I0213 19:36:48.211360   57526 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 19:36:48.215368   57526 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 19:36:48.215391   57526 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 19:36:48.215398   57526 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 19:36:48.215404   57526 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 19:36:48.215412   57526 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/addons for local assets ...
	I0213 19:36:48.215495   57526 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/files for local assets ...
	I0213 19:36:48.215656   57526 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> 388992.pem in /etc/ssl/certs
	I0213 19:36:48.215809   57526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 19:36:48.230347   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:36:48.269969   57526 start.go:303] post-start completed in 227.995485ms
	I0213 19:36:48.270124   57526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 19:36:48.270224   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:48.322803   57526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58309 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/newest-cni-886000/id_rsa Username:docker}
	I0213 19:36:48.415388   57526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 19:36:48.420326   57526 fix.go:56] fixHost completed within 5.505188402s
	I0213 19:36:48.420340   57526 start.go:83] releasing machines lock for "newest-cni-886000", held for 5.505225232s
	I0213 19:36:48.420420   57526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886000
	I0213 19:36:48.472150   57526 ssh_runner.go:195] Run: cat /version.json
	I0213 19:36:48.472169   57526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 19:36:48.472233   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:48.472253   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:48.529303   57526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58309 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/newest-cni-886000/id_rsa Username:docker}
	I0213 19:36:48.529303   57526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58309 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/newest-cni-886000/id_rsa Username:docker}
	I0213 19:36:48.734051   57526 ssh_runner.go:195] Run: systemctl --version
	I0213 19:36:48.738975   57526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 19:36:48.743954   57526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 19:36:48.774503   57526 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 19:36:48.774648   57526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 19:36:48.790090   57526 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0213 19:36:48.790120   57526 start.go:475] detecting cgroup driver to use...
	I0213 19:36:48.790136   57526 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:36:48.790255   57526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:36:48.821760   57526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 19:36:48.841403   57526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 19:36:48.876247   57526 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 19:36:48.876325   57526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 19:36:48.892750   57526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:36:48.909188   57526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 19:36:48.925419   57526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 19:36:48.946324   57526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 19:36:48.963212   57526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 19:36:48.979683   57526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 19:36:48.996271   57526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 19:36:49.011029   57526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:36:49.075348   57526 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 19:36:49.165226   57526 start.go:475] detecting cgroup driver to use...
	I0213 19:36:49.165247   57526 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 19:36:49.165328   57526 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 19:36:49.184243   57526 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 19:36:49.184328   57526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 19:36:49.205404   57526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 19:36:49.237393   57526 ssh_runner.go:195] Run: which cri-dockerd
	I0213 19:36:49.243020   57526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 19:36:49.264188   57526 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 19:36:49.331596   57526 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 19:36:49.426335   57526 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 19:36:49.511740   57526 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 19:36:49.511850   57526 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 19:36:49.541709   57526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:36:49.609596   57526 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 19:36:49.932301   57526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 19:36:49.951920   57526 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 19:36:49.978235   57526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 19:36:50.001024   57526 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 19:36:50.065660   57526 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 19:36:50.131729   57526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:36:50.192908   57526 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 19:36:50.222447   57526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 19:36:50.239716   57526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 19:36:50.305044   57526 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 19:36:50.402333   57526 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 19:36:50.402459   57526 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 19:36:50.407256   57526 start.go:543] Will wait 60s for crictl version
	I0213 19:36:50.407312   57526 ssh_runner.go:195] Run: which crictl
	I0213 19:36:50.411363   57526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 19:36:50.466657   57526 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0213 19:36:50.466745   57526 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:36:50.489992   57526 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 19:36:50.555512   57526 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0213 19:36:50.555671   57526 cli_runner.go:164] Run: docker exec -t newest-cni-886000 dig +short host.docker.internal
	I0213 19:36:50.668342   57526 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 19:36:50.668444   57526 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 19:36:50.673069   57526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 19:36:50.690387   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:50.766248   57526 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0213 19:36:50.787930   57526 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 19:36:50.788061   57526 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:36:50.809800   57526 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 19:36:50.809820   57526 docker.go:615] Images already preloaded, skipping extraction
	I0213 19:36:50.809908   57526 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 19:36:50.829840   57526 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 19:36:50.829875   57526 cache_images.go:84] Images are preloaded, skipping loading
	I0213 19:36:50.830024   57526 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 19:36:50.891602   57526 cni.go:84] Creating CNI manager for ""
	I0213 19:36:50.891622   57526 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:36:50.891637   57526 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0213 19:36:50.891652   57526 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-886000 NodeName:newest-cni-886000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 19:36:50.891765   57526 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-886000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 19:36:50.891842   57526 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-886000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-886000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 19:36:50.891922   57526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0213 19:36:50.907522   57526 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 19:36:50.907595   57526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 19:36:50.924057   57526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (420 bytes)
	I0213 19:36:50.952991   57526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0213 19:36:50.982359   57526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0213 19:36:51.012203   57526 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0213 19:36:51.016424   57526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 19:36:51.033390   57526 certs.go:56] Setting up /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/newest-cni-886000 for IP: 192.168.67.2
	I0213 19:36:51.033413   57526 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5f1a81e3b2f96d4314e8cdee92a3e3396cb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:36:51.033580   57526 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key
	I0213 19:36:51.033653   57526 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key
	I0213 19:36:51.033738   57526 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/newest-cni-886000/client.key
	I0213 19:36:51.033805   57526 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/newest-cni-886000/apiserver.key.c7fa3a9e
	I0213 19:36:51.033871   57526 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/newest-cni-886000/proxy-client.key
	I0213 19:36:51.034053   57526 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem (1338 bytes)
	W0213 19:36:51.034098   57526 certs.go:433] ignoring /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899_empty.pem, impossibly tiny 0 bytes
	I0213 19:36:51.034108   57526 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 19:36:51.034142   57526 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem (1078 bytes)
	I0213 19:36:51.034180   57526 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem (1123 bytes)
	I0213 19:36:51.034209   57526 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem (1679 bytes)
	I0213 19:36:51.034284   57526 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem (1708 bytes)
	I0213 19:36:51.034904   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/newest-cni-886000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 19:36:51.075855   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/newest-cni-886000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 19:36:51.117090   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/newest-cni-886000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 19:36:51.157299   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/newest-cni-886000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 19:36:51.197452   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 19:36:51.238349   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 19:36:51.279298   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 19:36:51.320202   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 19:36:51.362489   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 19:36:51.403950   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem --> /usr/share/ca-certificates/38899.pem (1338 bytes)
	I0213 19:36:51.444603   57526 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /usr/share/ca-certificates/388992.pem (1708 bytes)
	I0213 19:36:51.486143   57526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 19:36:51.515811   57526 ssh_runner.go:195] Run: openssl version
	I0213 19:36:51.521601   57526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 19:36:51.538352   57526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:36:51.544805   57526 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:36:51.544876   57526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 19:36:51.551768   57526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 19:36:51.568252   57526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38899.pem && ln -fs /usr/share/ca-certificates/38899.pem /etc/ssl/certs/38899.pem"
	I0213 19:36:51.585938   57526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38899.pem
	I0213 19:36:51.592086   57526 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 02:17 /usr/share/ca-certificates/38899.pem
	I0213 19:36:51.592160   57526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38899.pem
	I0213 19:36:51.600676   57526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38899.pem /etc/ssl/certs/51391683.0"
	I0213 19:36:51.617917   57526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388992.pem && ln -fs /usr/share/ca-certificates/388992.pem /etc/ssl/certs/388992.pem"
	I0213 19:36:51.634048   57526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388992.pem
	I0213 19:36:51.639023   57526 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 02:17 /usr/share/ca-certificates/388992.pem
	I0213 19:36:51.639083   57526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388992.pem
	I0213 19:36:51.646356   57526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/388992.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 19:36:51.662898   57526 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 19:36:51.667701   57526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 19:36:51.674253   57526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 19:36:51.680843   57526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 19:36:51.687310   57526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 19:36:51.694049   57526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 19:36:51.700513   57526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 19:36:51.707192   57526 kubeadm.go:404] StartCluster: {Name:newest-cni-886000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-886000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 19:36:51.707301   57526 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:36:51.725923   57526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 19:36:51.740904   57526 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 19:36:51.740924   57526 kubeadm.go:636] restartCluster start
	I0213 19:36:51.740979   57526 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 19:36:51.755413   57526 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:51.755508   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:36:51.808267   57526 kubeconfig.go:135] verify returned: extract IP: "newest-cni-886000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:36:51.808416   57526 kubeconfig.go:146] "newest-cni-886000" context is missing from /Users/jenkins/minikube-integration/18165-38421/kubeconfig - will repair!
	I0213 19:36:51.808728   57526 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/kubeconfig: {Name:mk18bf84f3ce48ab7f0238c5bd9b6dfe6fbb866a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:36:51.810215   57526 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 19:36:51.825407   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:51.825480   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:51.841715   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:52.325489   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:52.325581   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:52.346653   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:52.826134   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:52.826244   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:52.844307   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:53.325744   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:53.325847   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:53.342972   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:53.825550   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:53.825624   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:53.843330   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:54.325723   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:54.325827   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:54.342343   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:54.827619   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:54.827725   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:54.846120   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:55.326864   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:55.327043   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:55.345377   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:55.826674   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:55.826823   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:55.845394   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:56.325588   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:56.325700   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:56.343331   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:56.825523   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:56.825603   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:56.842268   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:57.326451   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:57.326533   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:57.343661   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:57.825622   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:57.825721   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:57.846206   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:58.327217   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:58.327430   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:58.345181   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:58.827179   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:58.827374   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:58.845661   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:59.325582   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:59.325684   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:59.343660   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:36:59.825911   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:36:59.826043   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:36:59.844452   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:37:00.325696   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:37:00.325814   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:37:00.342245   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:37:00.825632   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:37:00.825735   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:37:00.844902   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:37:01.325907   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:37:01.326047   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:37:01.343338   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:37:01.825740   57526 api_server.go:166] Checking apiserver status ...
	I0213 19:37:01.825817   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 19:37:01.842138   57526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:37:01.842160   57526 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 19:37:01.842178   57526 kubeadm.go:1135] stopping kube-system containers ...
	I0213 19:37:01.842249   57526 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 19:37:01.861807   57526 docker.go:483] Stopping containers: [87a7b1157461 9c870a328d22 0390f3303a93 aae6acb2be1e cde963d77931 d8c15b28917e b1c09251fa14 a2268a564491 a53d5fceaa35 9bc828ff19bb 6ab381c38225 62f26672cc45 d6413e94bd21 e31a85764e23 25065bf21e4e]
	I0213 19:37:01.861886   57526 ssh_runner.go:195] Run: docker stop 87a7b1157461 9c870a328d22 0390f3303a93 aae6acb2be1e cde963d77931 d8c15b28917e b1c09251fa14 a2268a564491 a53d5fceaa35 9bc828ff19bb 6ab381c38225 62f26672cc45 d6413e94bd21 e31a85764e23 25065bf21e4e
	I0213 19:37:01.880894   57526 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 19:37:01.898469   57526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 19:37:01.914242   57526 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5647 Feb 14 03:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 14 03:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Feb 14 03:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 14 03:36 /etc/kubernetes/scheduler.conf
	
	I0213 19:37:01.914326   57526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0213 19:37:01.928889   57526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0213 19:37:01.943340   57526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0213 19:37:01.958627   57526 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:37:01.958695   57526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0213 19:37:01.973263   57526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0213 19:37:01.988110   57526 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 19:37:01.988179   57526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0213 19:37:02.002964   57526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 19:37:02.017579   57526 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 19:37:02.017600   57526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:37:02.069810   57526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:37:02.821006   57526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:37:02.956304   57526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:37:03.018098   57526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:37:03.117977   57526 api_server.go:52] waiting for apiserver process to appear ...
	I0213 19:37:03.118059   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:37:03.619932   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:37:04.119989   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:37:04.206872   57526 api_server.go:72] duration metric: took 1.088882002s to wait for apiserver process to appear ...
	I0213 19:37:04.206893   57526 api_server.go:88] waiting for apiserver healthz status ...
	I0213 19:37:04.206921   57526 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58313/healthz ...
	I0213 19:37:04.208772   57526 api_server.go:269] stopped: https://127.0.0.1:58313/healthz: Get "https://127.0.0.1:58313/healthz": EOF
	I0213 19:37:04.707963   57526 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58313/healthz ...
	I0213 19:37:06.904071   57526 api_server.go:279] https://127.0.0.1:58313/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 19:37:06.904108   57526 api_server.go:103] status: https://127.0.0.1:58313/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 19:37:06.904132   57526 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58313/healthz ...
	I0213 19:37:07.000036   57526 api_server.go:279] https://127.0.0.1:58313/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0213 19:37:07.000057   57526 api_server.go:103] status: https://127.0.0.1:58313/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0213 19:37:07.207056   57526 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58313/healthz ...
	I0213 19:37:07.214162   57526 api_server.go:279] https://127.0.0.1:58313/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:37:07.214212   57526 api_server.go:103] status: https://127.0.0.1:58313/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:37:07.707414   57526 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58313/healthz ...
	I0213 19:37:07.717876   57526 api_server.go:279] https://127.0.0.1:58313/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:37:07.717906   57526 api_server.go:103] status: https://127.0.0.1:58313/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:37:08.207095   57526 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58313/healthz ...
	I0213 19:37:08.216720   57526 api_server.go:279] https://127.0.0.1:58313/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 19:37:08.216749   57526 api_server.go:103] status: https://127.0.0.1:58313/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 19:37:08.708074   57526 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58313/healthz ...
	I0213 19:37:08.714289   57526 api_server.go:279] https://127.0.0.1:58313/healthz returned 200:
	ok
	I0213 19:37:08.720705   57526 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 19:37:08.720720   57526 api_server.go:131] duration metric: took 4.513774825s to wait for apiserver health ...
	I0213 19:37:08.720728   57526 cni.go:84] Creating CNI manager for ""
	I0213 19:37:08.720738   57526 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 19:37:08.743380   57526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 19:37:08.765296   57526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 19:37:08.782829   57526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 19:37:08.811780   57526 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 19:37:08.820541   57526 system_pods.go:59] 8 kube-system pods found
	I0213 19:37:08.820562   57526 system_pods.go:61] "coredns-76f75df574-g6pmv" [9fd1af2f-3728-44b2-8c66-a979924ff9eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 19:37:08.820579   57526 system_pods.go:61] "etcd-newest-cni-886000" [da45ceec-2faa-4a23-9fe6-3b253dc401fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 19:37:08.820587   57526 system_pods.go:61] "kube-apiserver-newest-cni-886000" [3a0f1e40-ef96-41ff-8692-fe9326558d1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 19:37:08.820593   57526 system_pods.go:61] "kube-controller-manager-newest-cni-886000" [395bb8ab-db2a-44e4-885a-cba5fca1fffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 19:37:08.820599   57526 system_pods.go:61] "kube-proxy-br5cb" [571fa0f0-c7b5-45f3-a7c6-2146c1223edd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 19:37:08.820607   57526 system_pods.go:61] "kube-scheduler-newest-cni-886000" [a337d915-e78a-4856-a68f-cbf7dccf0534] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 19:37:08.820613   57526 system_pods.go:61] "metrics-server-57f55c9bc5-xkhzw" [5b97b92c-3fb7-4a52-9ec4-4f5d5ea9f92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 19:37:08.820618   57526 system_pods.go:61] "storage-provisioner" [d89e4c64-f087-40bf-b195-8b32fce52b64] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 19:37:08.820623   57526 system_pods.go:74] duration metric: took 8.830609ms to wait for pod list to return data ...
	I0213 19:37:08.820629   57526 node_conditions.go:102] verifying NodePressure condition ...
	I0213 19:37:08.823979   57526 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 19:37:08.823993   57526 node_conditions.go:123] node cpu capacity is 12
	I0213 19:37:08.824004   57526 node_conditions.go:105] duration metric: took 3.372261ms to run NodePressure ...
	I0213 19:37:08.824017   57526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 19:37:09.132487   57526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 19:37:09.143478   57526 ops.go:34] apiserver oom_adj: -16
	I0213 19:37:09.143495   57526 kubeadm.go:640] restartCluster took 17.402387672s
	I0213 19:37:09.143503   57526 kubeadm.go:406] StartCluster complete in 17.436144863s
	I0213 19:37:09.143515   57526 settings.go:142] acquiring lock: {Name:mke46562c9f92468d93bd6cd756238f74ba38936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:37:09.143597   57526 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 19:37:09.144289   57526 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/kubeconfig: {Name:mk18bf84f3ce48ab7f0238c5bd9b6dfe6fbb866a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 19:37:09.144564   57526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 19:37:09.144606   57526 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 19:37:09.144709   57526 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-886000"
	I0213 19:37:09.144736   57526 addons.go:69] Setting default-storageclass=true in profile "newest-cni-886000"
	I0213 19:37:09.144754   57526 addons.go:69] Setting dashboard=true in profile "newest-cni-886000"
	I0213 19:37:09.144759   57526 config.go:182] Loaded profile config "newest-cni-886000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 19:37:09.144770   57526 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-886000"
	I0213 19:37:09.144771   57526 addons.go:69] Setting metrics-server=true in profile "newest-cni-886000"
	I0213 19:37:09.144742   57526 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-886000"
	W0213 19:37:09.144789   57526 addons.go:243] addon storage-provisioner should already be in state true
	I0213 19:37:09.144863   57526 host.go:66] Checking if "newest-cni-886000" exists ...
	I0213 19:37:09.144776   57526 addons.go:234] Setting addon dashboard=true in "newest-cni-886000"
	W0213 19:37:09.144881   57526 addons.go:243] addon dashboard should already be in state true
	I0213 19:37:09.144789   57526 addons.go:234] Setting addon metrics-server=true in "newest-cni-886000"
	I0213 19:37:09.144941   57526 host.go:66] Checking if "newest-cni-886000" exists ...
	W0213 19:37:09.144985   57526 addons.go:243] addon metrics-server should already be in state true
	I0213 19:37:09.145063   57526 host.go:66] Checking if "newest-cni-886000" exists ...
	I0213 19:37:09.145182   57526 cli_runner.go:164] Run: docker container inspect newest-cni-886000 --format={{.State.Status}}
	I0213 19:37:09.145280   57526 cli_runner.go:164] Run: docker container inspect newest-cni-886000 --format={{.State.Status}}
	I0213 19:37:09.145338   57526 cli_runner.go:164] Run: docker container inspect newest-cni-886000 --format={{.State.Status}}
	I0213 19:37:09.146226   57526 cli_runner.go:164] Run: docker container inspect newest-cni-886000 --format={{.State.Status}}
	I0213 19:37:09.150817   57526 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-886000" context rescaled to 1 replicas
	I0213 19:37:09.151031   57526 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 19:37:09.173490   57526 out.go:177] * Verifying Kubernetes components...
	I0213 19:37:09.215742   57526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 19:37:09.248496   57526 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0213 19:37:09.248538   57526 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 19:37:09.227864   57526 addons.go:234] Setting addon default-storageclass=true in "newest-cni-886000"
	I0213 19:37:09.269636   57526 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0213 19:37:09.269651   57526 addons.go:243] addon default-storageclass should already be in state true
	I0213 19:37:09.281500   57526 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 19:37:09.281555   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:37:09.290354   57526 host.go:66] Checking if "newest-cni-886000" exists ...
	I0213 19:37:09.290363   57526 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 19:37:09.311604   57526 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0213 19:37:09.332614   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 19:37:09.332743   57526 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 19:37:09.353435   57526 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0213 19:37:09.353448   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0213 19:37:09.333088   57526 cli_runner.go:164] Run: docker container inspect newest-cni-886000 --format={{.State.Status}}
	I0213 19:37:09.353451   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 19:37:09.353481   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:37:09.353540   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:37:09.353545   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:37:09.386527   57526 api_server.go:52] waiting for apiserver process to appear ...
	I0213 19:37:09.386655   57526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 19:37:09.416110   57526 api_server.go:72] duration metric: took 264.910951ms to wait for apiserver process to appear ...
	I0213 19:37:09.416147   57526 api_server.go:88] waiting for apiserver healthz status ...
	I0213 19:37:09.416187   57526 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58313/healthz ...
	I0213 19:37:09.424200   57526 api_server.go:279] https://127.0.0.1:58313/healthz returned 200:
	ok
	I0213 19:37:09.426363   57526 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 19:37:09.426379   57526 api_server.go:131] duration metric: took 10.225035ms to wait for apiserver health ...
	I0213 19:37:09.426385   57526 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 19:37:09.435772   57526 system_pods.go:59] 8 kube-system pods found
	I0213 19:37:09.435834   57526 system_pods.go:61] "coredns-76f75df574-g6pmv" [9fd1af2f-3728-44b2-8c66-a979924ff9eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 19:37:09.435867   57526 system_pods.go:61] "etcd-newest-cni-886000" [da45ceec-2faa-4a23-9fe6-3b253dc401fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 19:37:09.435895   57526 system_pods.go:61] "kube-apiserver-newest-cni-886000" [3a0f1e40-ef96-41ff-8692-fe9326558d1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 19:37:09.435924   57526 system_pods.go:61] "kube-controller-manager-newest-cni-886000" [395bb8ab-db2a-44e4-885a-cba5fca1fffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 19:37:09.435936   57526 system_pods.go:61] "kube-proxy-br5cb" [571fa0f0-c7b5-45f3-a7c6-2146c1223edd] Running
	I0213 19:37:09.435947   57526 system_pods.go:61] "kube-scheduler-newest-cni-886000" [a337d915-e78a-4856-a68f-cbf7dccf0534] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 19:37:09.435970   57526 system_pods.go:61] "metrics-server-57f55c9bc5-xkhzw" [5b97b92c-3fb7-4a52-9ec4-4f5d5ea9f92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 19:37:09.435988   57526 system_pods.go:61] "storage-provisioner" [d89e4c64-f087-40bf-b195-8b32fce52b64] Running
	I0213 19:37:09.435999   57526 system_pods.go:74] duration metric: took 9.607965ms to wait for pod list to return data ...
	I0213 19:37:09.436006   57526 default_sa.go:34] waiting for default service account to be created ...
	I0213 19:37:09.442127   57526 default_sa.go:45] found service account: "default"
	I0213 19:37:09.442154   57526 default_sa.go:55] duration metric: took 6.139072ms for default service account to be created ...
	I0213 19:37:09.442170   57526 kubeadm.go:581] duration metric: took 290.976239ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0213 19:37:09.442195   57526 node_conditions.go:102] verifying NodePressure condition ...
	I0213 19:37:09.446806   57526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58309 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/newest-cni-886000/id_rsa Username:docker}
	I0213 19:37:09.446903   57526 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 19:37:09.446920   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 19:37:09.447053   57526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886000
	I0213 19:37:09.448763   57526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58309 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/newest-cni-886000/id_rsa Username:docker}
	I0213 19:37:09.448908   57526 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 19:37:09.448940   57526 node_conditions.go:123] node cpu capacity is 12
	I0213 19:37:09.448955   57526 node_conditions.go:105] duration metric: took 6.749451ms to run NodePressure ...
	I0213 19:37:09.448977   57526 start.go:228] waiting for startup goroutines ...
	I0213 19:37:09.450759   57526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58309 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/newest-cni-886000/id_rsa Username:docker}
	I0213 19:37:09.513508   57526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58309 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/newest-cni-886000/id_rsa Username:docker}
	I0213 19:37:09.581224   57526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 19:37:09.581234   57526 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0213 19:37:09.581236   57526 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 19:37:09.581242   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0213 19:37:09.581249   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 19:37:09.617139   57526 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 19:37:09.617158   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 19:37:09.617561   57526 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0213 19:37:09.617573   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0213 19:37:09.641783   57526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 19:37:09.656789   57526 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 19:37:09.656807   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 19:37:09.657676   57526 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0213 19:37:09.657693   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0213 19:37:09.803444   57526 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0213 19:37:09.803458   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0213 19:37:09.807127   57526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 19:37:09.909677   57526 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0213 19:37:09.909693   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0213 19:37:10.029242   57526 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0213 19:37:10.029266   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0213 19:37:10.206037   57526 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0213 19:37:10.206067   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0213 19:37:10.243547   57526 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0213 19:37:10.243563   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0213 19:37:10.326118   57526 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0213 19:37:10.326138   57526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0213 19:37:10.411691   57526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0213 19:37:11.031558   57526 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.450292581s)
	I0213 19:37:11.031565   57526 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.389745335s)
	I0213 19:37:11.209507   57526 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.402344188s)
	I0213 19:37:11.209533   57526 addons.go:470] Verifying addon metrics-server=true in "newest-cni-886000"
	I0213 19:37:11.482257   57526 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.070518114s)
	I0213 19:37:11.505394   57526 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-886000 addons enable metrics-server
	
	I0213 19:37:11.525360   57526 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0213 19:37:11.546244   57526 addons.go:505] enable addons completed in 2.401619622s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0213 19:37:11.546269   57526 start.go:233] waiting for cluster config update ...
	I0213 19:37:11.546292   57526 start.go:242] writing updated cluster config ...
	I0213 19:37:11.584621   57526 ssh_runner.go:195] Run: rm -f paused
	I0213 19:37:11.629181   57526 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0213 19:37:11.650491   57526 out.go:177] * Done! kubectl is now configured to use "newest-cni-886000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.038417857Z" level=info msg="Loading containers: start."
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.126201672Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.165065133Z" level=info msg="Loading containers: done."
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.172861120Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.172921650Z" level=info msg="Daemon has completed initialization"
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.192080634Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 03:18:14 old-k8s-version-187000 systemd[1]: Started Docker Application Container Engine.
	Feb 14 03:18:14 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:14.192148976Z" level=info msg="API listen on [::]:2376"
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:22.560647039Z" level=info msg="Processing signal 'terminated'"
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:22.561661074Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[712]: time="2024-02-14T03:18:22.562254523Z" level=info msg="Daemon shutdown complete"
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: docker.service: Deactivated successfully.
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 14 03:18:22 old-k8s-version-187000 systemd[1]: Starting Docker Application Container Engine...
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.624136291Z" level=info msg="Starting up"
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.631799867Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.875735799Z" level=info msg="Loading containers: start."
	Feb 14 03:18:22 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:22.970706234Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.007890871Z" level=info msg="Loading containers: done."
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.015528496Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.015592390Z" level=info msg="Daemon has completed initialization"
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.033874246Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 03:18:23 old-k8s-version-187000 dockerd[936]: time="2024-02-14T03:18:23.033915504Z" level=info msg="API listen on [::]:2376"
	Feb 14 03:18:23 old-k8s-version-187000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	time="2024-02-14T03:42:02Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 03:42:02 up  2:21,  0 users,  load average: 2.52, 3.54, 4.38
	Linux old-k8s-version-187000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 14 03:42:00 old-k8s-version-187000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 03:42:01 old-k8s-version-187000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1330.
	Feb 14 03:42:01 old-k8s-version-187000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 03:42:01 old-k8s-version-187000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 03:42:01 old-k8s-version-187000 kubelet[39562]: I0214 03:42:01.639870   39562 server.go:410] Version: v1.16.0
	Feb 14 03:42:01 old-k8s-version-187000 kubelet[39562]: I0214 03:42:01.640045   39562 plugins.go:100] No cloud provider specified.
	Feb 14 03:42:01 old-k8s-version-187000 kubelet[39562]: I0214 03:42:01.640053   39562 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 03:42:01 old-k8s-version-187000 kubelet[39562]: I0214 03:42:01.641733   39562 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 03:42:01 old-k8s-version-187000 kubelet[39562]: W0214 03:42:01.642665   39562 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 03:42:01 old-k8s-version-187000 kubelet[39562]: W0214 03:42:01.642740   39562 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 03:42:01 old-k8s-version-187000 kubelet[39562]: F0214 03:42:01.642773   39562 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 03:42:01 old-k8s-version-187000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 03:42:01 old-k8s-version-187000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 03:42:02 old-k8s-version-187000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1331.
	Feb 14 03:42:02 old-k8s-version-187000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 03:42:02 old-k8s-version-187000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 03:42:02 old-k8s-version-187000 kubelet[39660]: I0214 03:42:02.369330   39660 server.go:410] Version: v1.16.0
	Feb 14 03:42:02 old-k8s-version-187000 kubelet[39660]: I0214 03:42:02.369541   39660 plugins.go:100] No cloud provider specified.
	Feb 14 03:42:02 old-k8s-version-187000 kubelet[39660]: I0214 03:42:02.369550   39660 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 03:42:02 old-k8s-version-187000 kubelet[39660]: I0214 03:42:02.371329   39660 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 03:42:02 old-k8s-version-187000 kubelet[39660]: W0214 03:42:02.380548   39660 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 03:42:02 old-k8s-version-187000 kubelet[39660]: W0214 03:42:02.380945   39660 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 03:42:02 old-k8s-version-187000 kubelet[39660]: F0214 03:42:02.380977   39660 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 03:42:02 old-k8s-version-187000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 03:42:02 old-k8s-version-187000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
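The kubelet section of the dump above points at the likely root cause of this failure: kubelet v1.16.0 exits immediately with "failed to run Kubelet: mountpoint for cpu not found", the typical symptom of a pre-cgroup-v2 kubelet running on a host that only exposes the unified cgroup v2 hierarchy (here a 6.6.12-linuxkit kernel). Because the kubelet never stays up, the dockershim socket it would normally serve at /var/run/dockershim.sock is never created, which is why the "container status" probe fails and the apiserver never comes back. A minimal diagnostic sketch, assuming shell access to the node via "minikube ssh" and the profile name shown in the log (everything below is illustrative and not part of the recorded test run):

	# Open a shell on the node of the failing profile (profile name taken from the log above).
	minikube ssh -p old-k8s-version-187000

	# Report the cgroup hierarchy type. "cgroup2fs" means only cgroup v2 is available,
	# which kubelet v1.16 cannot use; "tmpfs" would indicate the legacy v1 layout it expects.
	stat -fc %T /sys/fs/cgroup/

	# Confirm the crash loop and the missing dockershim socket.
	systemctl --no-pager status kubelet
	sudo journalctl -u kubelet --no-pager | tail -n 20
	ls -l /var/run/dockershim.sock || echo "dockershim socket not present (kubelet is not running)"
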
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-187000 -n old-k8s-version-187000
E0213 19:42:02.990332   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/default-k8s-diff-port-069000/client.crt: no such file or directory
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 2 (406.651022ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-187000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (383.76s)
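For background on the readiness polling that appears earlier in this dump (the repeated /healthz probes against https://127.0.0.1:58313): an apiserver that is up but still bootstrapping first answers 403 to the anonymous probe, because the system:public-info-viewer ClusterRole that permits unauthenticated access to /healthz has not been created yet, then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200 with the body "ok". A rough way to reproduce the same probe by hand, assuming the Docker-forwarded port shown in the log is still valid (it changes on every run):

	# The same health probe minikube's wait loop performs; -k skips TLS verification
	# because the apiserver certificate is not trusted by the host.
	curl -ks https://127.0.0.1:58313/healthz; echo

	# The verbose form lists the individual checks, matching the [+]/[-] output seen above.
	curl -ks "https://127.0.0.1:58313/healthz?verbose"
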

                                                
                                    

Test pass (300/333)

Order  Passed test  Duration (seconds)
3 TestDownloadOnly/v1.16.0/json-events 23.98
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.34
9 TestDownloadOnly/v1.16.0/DeleteAll 0.66
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.38
12 TestDownloadOnly/v1.28.4/json-events 25.51
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.32
18 TestDownloadOnly/v1.28.4/DeleteAll 0.69
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.39
21 TestDownloadOnly/v1.29.0-rc.2/json-events 20.06
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.31
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.66
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.38
29 TestDownloadOnlyKic 2
30 TestBinaryMirror 1.72
31 TestOffline 44.01
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 340.5
40 TestAddons/parallel/InspektorGadget 10.98
41 TestAddons/parallel/MetricsServer 6.87
42 TestAddons/parallel/HelmTiller 11.35
44 TestAddons/parallel/CSI 64.62
45 TestAddons/parallel/Headlamp 13.81
46 TestAddons/parallel/CloudSpanner 5.36
47 TestAddons/parallel/LocalPath 55.12
48 TestAddons/parallel/NvidiaDevicePlugin 5.73
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
53 TestAddons/StoppedEnableDisable 11.8
54 TestCertOptions 27.21
55 TestCertExpiration 232.91
56 TestDockerFlags 27.09
57 TestForceSystemdFlag 26.96
58 TestForceSystemdEnv 24.92
61 TestHyperKitDriverInstallOrUpdate 7.74
64 TestErrorSpam/setup 22.49
65 TestErrorSpam/start 2.15
66 TestErrorSpam/status 1.31
67 TestErrorSpam/pause 1.79
68 TestErrorSpam/unpause 1.98
69 TestErrorSpam/stop 11.49
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 38.65
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 38.52
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 10.08
81 TestFunctional/serial/CacheCmd/cache/add_local 1.85
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
83 TestFunctional/serial/CacheCmd/cache/list 0.08
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.45
85 TestFunctional/serial/CacheCmd/cache/cache_reload 3.43
86 TestFunctional/serial/CacheCmd/cache/delete 0.17
87 TestFunctional/serial/MinikubeKubectlCmd 1.26
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.69
89 TestFunctional/serial/ExtraConfig 40.33
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 3.27
92 TestFunctional/serial/LogsFileCmd 3.27
93 TestFunctional/serial/InvalidService 4.66
95 TestFunctional/parallel/ConfigCmd 0.53
96 TestFunctional/parallel/DashboardCmd 20.83
97 TestFunctional/parallel/DryRun 1.78
98 TestFunctional/parallel/InternationalLanguage 0.84
99 TestFunctional/parallel/StatusCmd 1.37
104 TestFunctional/parallel/AddonsCmd 0.27
105 TestFunctional/parallel/PersistentVolumeClaim 40.41
107 TestFunctional/parallel/SSHCmd 0.8
108 TestFunctional/parallel/CpCmd 2.74
109 TestFunctional/parallel/MySQL 210.59
110 TestFunctional/parallel/FileSync 0.48
111 TestFunctional/parallel/CertSync 2.76
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
119 TestFunctional/parallel/License 1.37
120 TestFunctional/parallel/Version/short 0.11
121 TestFunctional/parallel/Version/components 0.91
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
126 TestFunctional/parallel/ImageCommands/ImageBuild 5.38
127 TestFunctional/parallel/ImageCommands/Setup 5.56
128 TestFunctional/parallel/DockerEnv/bash 2.01
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.34
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.01
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.02
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.14
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.07
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.3
139 TestFunctional/parallel/ServiceCmd/DeployApp 61.13
140 TestFunctional/parallel/ServiceCmd/List 0.47
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
142 TestFunctional/parallel/ServiceCmd/HTTPS 15
143 TestFunctional/parallel/ServiceCmd/Format 15
144 TestFunctional/parallel/ServiceCmd/URL 15
146 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 36.16
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
151 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
157 TestFunctional/parallel/ProfileCmd/profile_list 0.51
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
159 TestFunctional/parallel/MountCmd/any-port 12.44
160 TestFunctional/parallel/MountCmd/specific-port 2.54
161 TestFunctional/parallel/MountCmd/VerifyCleanup 2.97
162 TestFunctional/delete_addon-resizer_images 0.14
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.05
168 TestImageBuild/serial/Setup 21.64
169 TestImageBuild/serial/NormalBuild 4.32
170 TestImageBuild/serial/BuildWithBuildArg 1.14
171 TestImageBuild/serial/BuildWithDockerIgnore 0.96
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.04
182 TestJSONOutput/start/Command 75.46
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.57
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.62
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 10.84
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.77
207 TestKicCustomNetwork/create_custom_network 24.73
208 TestKicCustomNetwork/use_default_bridge_network 24.06
209 TestKicExistingNetwork 24.65
210 TestKicCustomSubnet 24.21
211 TestKicStaticIP 24.8
212 TestMainNoArgs 0.08
213 TestMinikubeProfile 51.09
216 TestMountStart/serial/StartWithMountFirst 7.93
217 TestMountStart/serial/VerifyMountFirst 0.39
218 TestMountStart/serial/StartWithMountSecond 7.88
219 TestMountStart/serial/VerifyMountSecond 0.38
220 TestMountStart/serial/DeleteFirst 2.08
221 TestMountStart/serial/VerifyMountPostDelete 0.39
222 TestMountStart/serial/Stop 1.56
223 TestMountStart/serial/RestartStopped 6.05
224 TestMountStart/serial/VerifyMountPostStop 0.39
227 TestMultiNode/serial/FreshStart2Nodes 64.95
228 TestMultiNode/serial/DeployApp2Nodes 45.53
229 TestMultiNode/serial/PingHostFrom2Pods 0.95
230 TestMultiNode/serial/AddNode 15.4
231 TestMultiNode/serial/MultiNodeLabels 0.06
232 TestMultiNode/serial/ProfileList 0.47
233 TestMultiNode/serial/CopyFile 14.78
234 TestMultiNode/serial/StopNode 3.08
235 TestMultiNode/serial/StartAfterStop 14.27
236 TestMultiNode/serial/RestartKeepsNodes 102.12
237 TestMultiNode/serial/DeleteNode 6.05
238 TestMultiNode/serial/StopMultiNode 21.8
239 TestMultiNode/serial/RestartMultiNode 82.59
240 TestMultiNode/serial/ValidateNameConflict 26.54
244 TestPreload 157.09
246 TestScheduledStopUnix 96
249 TestInsufficientStorage 10.73
250 TestRunningBinaryUpgrade 87.42
253 TestMissingContainerUpgrade 202.67
265 TestStoppedBinaryUpgrade/Setup 4.54
266 TestStoppedBinaryUpgrade/Upgrade 73.97
267 TestStoppedBinaryUpgrade/MinikubeLogs 2.95
276 TestPause/serial/Start 39.07
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.53
279 TestNoKubernetes/serial/StartWithK8s 24.84
280 TestNoKubernetes/serial/StartWithStopK8s 8.91
281 TestNoKubernetes/serial/Start 7.82
282 TestPause/serial/SecondStartNoReconfiguration 40.7
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
284 TestNoKubernetes/serial/ProfileList 14.68
285 TestNoKubernetes/serial/Stop 1.57
286 TestNoKubernetes/serial/StartNoArgs 8.06
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
288 TestPause/serial/Pause 0.67
289 TestPause/serial/VerifyStatus 0.45
290 TestPause/serial/Unpause 0.67
291 TestPause/serial/PauseAgain 0.89
292 TestPause/serial/DeletePaused 2.58
293 TestPause/serial/VerifyDeletedResources 0.59
294 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 21.45
295 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 24.64
296 TestNetworkPlugins/group/auto/Start 35.92
297 TestNetworkPlugins/group/auto/KubeletFlags 0.4
298 TestNetworkPlugins/group/auto/NetCatPod 14.2
299 TestNetworkPlugins/group/auto/DNS 0.13
300 TestNetworkPlugins/group/auto/Localhost 0.12
301 TestNetworkPlugins/group/auto/HairPin 0.12
302 TestNetworkPlugins/group/calico/Start 65.89
303 TestNetworkPlugins/group/calico/ControllerPod 6.01
304 TestNetworkPlugins/group/calico/KubeletFlags 0.39
305 TestNetworkPlugins/group/calico/NetCatPod 15.21
306 TestNetworkPlugins/group/calico/DNS 0.15
307 TestNetworkPlugins/group/calico/Localhost 0.13
308 TestNetworkPlugins/group/calico/HairPin 0.14
309 TestNetworkPlugins/group/custom-flannel/Start 54.93
310 TestNetworkPlugins/group/false/Start 40.21
311 TestNetworkPlugins/group/false/KubeletFlags 0.41
312 TestNetworkPlugins/group/false/NetCatPod 14.19
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.17
315 TestNetworkPlugins/group/false/DNS 0.14
316 TestNetworkPlugins/group/false/Localhost 0.12
317 TestNetworkPlugins/group/false/HairPin 0.12
318 TestNetworkPlugins/group/custom-flannel/DNS 0.14
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
321 TestNetworkPlugins/group/kindnet/Start 52.51
322 TestNetworkPlugins/group/flannel/Start 53.37
323 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
324 TestNetworkPlugins/group/flannel/ControllerPod 6.01
325 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
326 TestNetworkPlugins/group/kindnet/NetCatPod 13.19
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
328 TestNetworkPlugins/group/flannel/NetCatPod 14.2
329 TestNetworkPlugins/group/kindnet/DNS 0.14
330 TestNetworkPlugins/group/kindnet/Localhost 0.12
331 TestNetworkPlugins/group/kindnet/HairPin 0.12
332 TestNetworkPlugins/group/flannel/DNS 0.13
333 TestNetworkPlugins/group/flannel/Localhost 0.12
334 TestNetworkPlugins/group/flannel/HairPin 0.12
335 TestNetworkPlugins/group/enable-default-cni/Start 78.49
336 TestNetworkPlugins/group/bridge/Start 37.59
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
338 TestNetworkPlugins/group/bridge/NetCatPod 13.21
339 TestNetworkPlugins/group/bridge/DNS 0.14
340 TestNetworkPlugins/group/bridge/Localhost 0.16
341 TestNetworkPlugins/group/bridge/HairPin 0.12
342 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
343 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.24
344 TestNetworkPlugins/group/kubenet/Start 38.2
345 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
346 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
347 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
350 TestNetworkPlugins/group/kubenet/KubeletFlags 0.42
351 TestNetworkPlugins/group/kubenet/NetCatPod 14.33
352 TestNetworkPlugins/group/kubenet/DNS 0.22
353 TestNetworkPlugins/group/kubenet/Localhost 0.13
354 TestNetworkPlugins/group/kubenet/HairPin 0.13
356 TestStartStop/group/no-preload/serial/FirstStart 101.06
357 TestStartStop/group/no-preload/serial/DeployApp 13.25
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
359 TestStartStop/group/no-preload/serial/Stop 10.9
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.43
361 TestStartStop/group/no-preload/serial/SecondStart 335.48
364 TestStartStop/group/old-k8s-version/serial/Stop 1.55
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.44
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
370 TestStartStop/group/no-preload/serial/Pause 3.57
372 TestStartStop/group/embed-certs/serial/FirstStart 76.36
373 TestStartStop/group/embed-certs/serial/DeployApp 13.26
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.24
375 TestStartStop/group/embed-certs/serial/Stop 11.08
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.45
377 TestStartStop/group/embed-certs/serial/SecondStart 330.96
379 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 19.01
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
381 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.33
382 TestStartStop/group/embed-certs/serial/Pause 3.39
384 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.53
385 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 13.25
386 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.28
387 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.92
388 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.45
389 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 338.34
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 19.01
392 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
393 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
394 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.43
396 TestStartStop/group/newest-cni/serial/FirstStart 35.39
397 TestStartStop/group/newest-cni/serial/DeployApp 0
398 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
399 TestStartStop/group/newest-cni/serial/Stop 10.91
400 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.44
401 TestStartStop/group/newest-cni/serial/SecondStart 30.11
402 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
405 TestStartStop/group/newest-cni/serial/Pause 3.46
TestDownloadOnly/v1.16.0/json-events (23.98s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-034000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-034000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (23.974715287s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (23.98s)
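For reference, the json-events run above can be reproduced outside the test harness with the same flags the test passes to minikube. This is a minimal sketch, assuming a locally built out/minikube-darwin-amd64 and a throwaway profile name (download-only-demo is hypothetical):

  # Run the download-only start and capture the machine-readable progress events on stdout.
  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-demo \
    --force --alsologtostderr --kubernetes-version=v1.16.0 \
    --container-runtime=docker --driver=docker > events.json
  # --alsologtostderr keeps the debug log on stderr, so events.json holds only the JSON events.
  # Remove the throwaway profile when done.
  out/minikube-darwin-amd64 delete -p download-only-demo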

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.34s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-034000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-034000: exit status 85 (340.474777ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-034000 | jenkins | v1.32.0 | 13 Feb 24 18:07 PST |          |
	|         | -p download-only-034000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 18:07:41
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 18:07:41.148301   38901 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:07:41.148570   38901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:07:41.148576   38901 out.go:304] Setting ErrFile to fd 2...
	I0213 18:07:41.148580   38901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:07:41.148758   38901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	W0213 18:07:41.148860   38901 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18165-38421/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18165-38421/.minikube/config/config.json: no such file or directory
	I0213 18:07:41.150851   38901 out.go:298] Setting JSON to true
	I0213 18:07:41.178461   38901 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":13320,"bootTime":1707863141,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 18:07:41.178578   38901 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 18:07:41.200415   38901 out.go:97] [download-only-034000] minikube v1.32.0 on Darwin 14.3.1
	I0213 18:07:41.221963   38901 out.go:169] MINIKUBE_LOCATION=18165
	I0213 18:07:41.200626   38901 notify.go:220] Checking for updates...
	W0213 18:07:41.200634   38901 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball: no such file or directory
	I0213 18:07:41.265419   38901 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 18:07:41.287056   38901 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 18:07:41.308184   38901 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 18:07:41.349823   38901 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	W0213 18:07:41.393048   38901 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 18:07:41.393734   38901 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 18:07:41.451600   38901 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 18:07:41.451757   38901 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:07:41.567949   38901 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-14 02:07:41.556695978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:07:41.589015   38901 out.go:97] Using the docker driver based on user configuration
	I0213 18:07:41.589062   38901 start.go:298] selected driver: docker
	I0213 18:07:41.589077   38901 start.go:902] validating driver "docker" against <nil>
	I0213 18:07:41.589309   38901 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:07:41.701045   38901 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-14 02:07:41.690545645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:07:41.701209   38901 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 18:07:41.706018   38901 start_flags.go:392] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0213 18:07:41.706451   38901 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 18:07:41.729168   38901 out.go:169] Using Docker Desktop driver with root privileges
	I0213 18:07:41.750281   38901 cni.go:84] Creating CNI manager for ""
	I0213 18:07:41.750310   38901 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 18:07:41.750328   38901 start_flags.go:321] config:
	{Name:download-only-034000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-034000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 18:07:41.772134   38901 out.go:97] Starting control plane node download-only-034000 in cluster download-only-034000
	I0213 18:07:41.772178   38901 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 18:07:41.794000   38901 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0213 18:07:41.794038   38901 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 18:07:41.794064   38901 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 18:07:41.845787   38901 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 18:07:41.846226   38901 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0213 18:07:41.846360   38901 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 18:07:42.073494   38901 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0213 18:07:42.073544   38901 cache.go:56] Caching tarball of preloaded images
	I0213 18:07:42.074329   38901 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 18:07:42.096163   38901 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0213 18:07:42.096191   38901 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0213 18:07:42.642497   38901 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0213 18:07:57.685669   38901 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0213 18:07:57.685840   38901 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0213 18:07:58.246034   38901 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 18:07:58.246271   38901 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/download-only-034000/config.json ...
	I0213 18:07:58.246298   38901 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/download-only-034000/config.json: {Name:mk9675d9b31d690afa04db7302d6cd4128faaccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:07:58.246961   38901 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 18:07:58.247724   38901 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-034000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.34s)
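The exit status 85 recorded above is what this subtest expects: a download-only profile never creates a control plane node (the captured output itself says the control plane node "" does not exist), so "minikube logs" has nothing to read. A quick manual check along the same lines, assuming the binary and the download-only-034000 profile from this run are still present:

  # "logs" against a download-only profile is expected to fail with a non-zero status.
  out/minikube-darwin-amd64 logs -p download-only-034000
  echo "exit status: $?"   # 85 in this run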

TestDownloadOnly/v1.16.0/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.66s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-034000
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnly/v1.28.4/json-events (25.51s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-613000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-613000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (25.504864523s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (25.51s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)
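The preload-exists subtests only verify that the tarball fetched by the preceding json-events step is present in the local cache. A minimal check along the same lines, assuming the MINIKUBE_HOME used by this run (substitute your own .minikube directory otherwise):

  # List the cached preload tarballs; one entry per Kubernetes version exercised here.
  ls -lh /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/
  # Names follow the pattern seen in the download log, e.g.
  #   preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4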

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-613000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-613000: exit status 85 (324.317655ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-034000 | jenkins | v1.32.0 | 13 Feb 24 18:07 PST |                     |
	|         | -p download-only-034000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 13 Feb 24 18:08 PST | 13 Feb 24 18:08 PST |
	| delete  | -p download-only-034000        | download-only-034000 | jenkins | v1.32.0 | 13 Feb 24 18:08 PST | 13 Feb 24 18:08 PST |
	| start   | -o=json --download-only        | download-only-613000 | jenkins | v1.32.0 | 13 Feb 24 18:08 PST |                     |
	|         | -p download-only-613000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 18:08:06
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 18:08:06.511074   38989 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:08:06.511271   38989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:08:06.511276   38989 out.go:304] Setting ErrFile to fd 2...
	I0213 18:08:06.511280   38989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:08:06.511457   38989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 18:08:06.513037   38989 out.go:298] Setting JSON to true
	I0213 18:08:06.537037   38989 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":13345,"bootTime":1707863141,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 18:08:06.537147   38989 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 18:08:06.557984   38989 out.go:97] [download-only-613000] minikube v1.32.0 on Darwin 14.3.1
	I0213 18:08:06.579043   38989 out.go:169] MINIKUBE_LOCATION=18165
	I0213 18:08:06.558240   38989 notify.go:220] Checking for updates...
	I0213 18:08:06.621826   38989 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 18:08:06.643002   38989 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 18:08:06.664100   38989 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 18:08:06.685169   38989 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	W0213 18:08:06.727840   38989 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 18:08:06.728127   38989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 18:08:06.786514   38989 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 18:08:06.786728   38989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:08:06.894718   38989 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-14 02:08:06.883696478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:08:06.915963   38989 out.go:97] Using the docker driver based on user configuration
	I0213 18:08:06.916010   38989 start.go:298] selected driver: docker
	I0213 18:08:06.916025   38989 start.go:902] validating driver "docker" against <nil>
	I0213 18:08:06.916243   38989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:08:07.026661   38989 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-14 02:08:07.016850595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:08:07.026842   38989 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 18:08:07.029809   38989 start_flags.go:392] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0213 18:08:07.029949   38989 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 18:08:07.052048   38989 out.go:169] Using Docker Desktop driver with root privileges
	I0213 18:08:07.073186   38989 cni.go:84] Creating CNI manager for ""
	I0213 18:08:07.073222   38989 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 18:08:07.073251   38989 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 18:08:07.073270   38989 start_flags.go:321] config:
	{Name:download-only-613000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-613000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 18:08:07.094855   38989 out.go:97] Starting control plane node download-only-613000 in cluster download-only-613000
	I0213 18:08:07.094932   38989 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 18:08:07.117132   38989 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0213 18:08:07.117186   38989 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 18:08:07.117250   38989 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 18:08:07.170250   38989 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 18:08:07.170452   38989 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0213 18:08:07.170476   38989 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0213 18:08:07.170482   38989 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0213 18:08:07.170491   38989 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0213 18:08:07.395174   38989 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0213 18:08:07.395198   38989 cache.go:56] Caching tarball of preloaded images
	I0213 18:08:07.395599   38989 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 18:08:07.417090   38989 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0213 18:08:07.417101   38989 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0213 18:08:07.987853   38989 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0213 18:08:25.137454   38989 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0213 18:08:25.137640   38989 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0213 18:08:25.772669   38989 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 18:08:25.772918   38989 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/download-only-613000/config.json ...
	I0213 18:08:25.772943   38989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/download-only-613000/config.json: {Name:mk8c7aa9364ca1f5ce9bce82bd05779aeb97953c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 18:08:25.773449   38989 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 18:08:25.773890   38989 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-613000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.32s)

TestDownloadOnly/v1.28.4/DeleteAll (0.69s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.69s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.39s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-613000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.39s)

TestDownloadOnly/v1.29.0-rc.2/json-events (20.06s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-531000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-531000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (20.060475437s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (20.06s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-531000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-531000: exit status 85 (313.941996ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-034000 | jenkins | v1.32.0 | 13 Feb 24 18:07 PST |                     |
	|         | -p download-only-034000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 18:08 PST | 13 Feb 24 18:08 PST |
	| delete  | -p download-only-034000           | download-only-034000 | jenkins | v1.32.0 | 13 Feb 24 18:08 PST | 13 Feb 24 18:08 PST |
	| start   | -o=json --download-only           | download-only-613000 | jenkins | v1.32.0 | 13 Feb 24 18:08 PST |                     |
	|         | -p download-only-613000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 18:08 PST | 13 Feb 24 18:08 PST |
	| delete  | -p download-only-613000           | download-only-613000 | jenkins | v1.32.0 | 13 Feb 24 18:08 PST | 13 Feb 24 18:08 PST |
	| start   | -o=json --download-only           | download-only-531000 | jenkins | v1.32.0 | 13 Feb 24 18:08 PST |                     |
	|         | -p download-only-531000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 18:08:33
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 18:08:33.425677   39078 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:08:33.425943   39078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:08:33.425949   39078 out.go:304] Setting ErrFile to fd 2...
	I0213 18:08:33.425953   39078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:08:33.426135   39078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 18:08:33.427744   39078 out.go:298] Setting JSON to true
	I0213 18:08:33.455950   39078 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":13372,"bootTime":1707863141,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 18:08:33.456111   39078 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 18:08:33.477123   39078 out.go:97] [download-only-531000] minikube v1.32.0 on Darwin 14.3.1
	I0213 18:08:33.499269   39078 out.go:169] MINIKUBE_LOCATION=18165
	I0213 18:08:33.477307   39078 notify.go:220] Checking for updates...
	I0213 18:08:33.542158   39078 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 18:08:33.565215   39078 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 18:08:33.586920   39078 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 18:08:33.608357   39078 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	W0213 18:08:33.650950   39078 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 18:08:33.651285   39078 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 18:08:33.711150   39078 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 18:08:33.711297   39078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:08:34.050894   39078 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-14 02:08:34.039073736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:08:34.072065   39078 out.go:97] Using the docker driver based on user configuration
	I0213 18:08:34.072105   39078 start.go:298] selected driver: docker
	I0213 18:08:34.072119   39078 start.go:902] validating driver "docker" against <nil>
	I0213 18:08:34.072312   39078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:08:34.184645   39078 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-14 02:08:34.174160438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:08:34.184824   39078 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 18:08:34.188417   39078 start_flags.go:392] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0213 18:08:34.189061   39078 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 18:08:34.210096   39078 out.go:169] Using Docker Desktop driver with root privileges
	I0213 18:08:34.230988   39078 cni.go:84] Creating CNI manager for ""
	I0213 18:08:34.231030   39078 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 18:08:34.231051   39078 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 18:08:34.231070   39078 start_flags.go:321] config:
	{Name:download-only-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-531000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 18:08:34.253058   39078 out.go:97] Starting control plane node download-only-531000 in cluster download-only-531000
	I0213 18:08:34.253099   39078 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 18:08:34.275040   39078 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0213 18:08:34.275071   39078 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 18:08:34.275123   39078 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 18:08:34.327019   39078 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 18:08:34.327563   39078 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0213 18:08:34.327605   39078 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0213 18:08:34.327612   39078 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0213 18:08:34.327621   39078 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0213 18:08:34.526622   39078 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0213 18:08:34.526670   39078 cache.go:56] Caching tarball of preloaded images
	I0213 18:08:34.527357   39078 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 18:08:34.549163   39078 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0213 18:08:34.549190   39078 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0213 18:08:35.091767   39078 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-531000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.31s)
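
Note on the exit status above: "minikube logs" returns a non-zero code (85 in this run) because the download-only profile never started a control plane node, so there is nothing to collect logs from; the step only exercises the logs path and its duration, so the test still passes. A minimal sketch of reproducing that check outside the suite, assuming the same binary path and profile name as in the log (this is not the suite's own helper):

// Illustrative sketch (not the minikube test helper): run the minikube binary
// and inspect the exit code the way the LogsDuration step tolerates it.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken from the log above; adjust for your checkout.
	cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-531000")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// A non-zero exit (85 in the run above) is expected here: the profile was
		// only used with --download-only, so no control plane node exists yet.
		fmt.Printf("minikube logs exited %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("%s", out)
}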

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.66s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-531000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (2s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-597000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-597000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-597000
--- PASS: TestDownloadOnlyKic (2.00s)

TestBinaryMirror (1.72s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-013000 --alsologtostderr --binary-mirror http://127.0.0.1:52581 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-013000 --alsologtostderr --binary-mirror http://127.0.0.1:52581 --driver=docker : (1.092889833s)
helpers_test.go:175: Cleaning up "binary-mirror-013000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-013000
--- PASS: TestBinaryMirror (1.72s)
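
TestBinaryMirror points minikube at a local HTTP endpoint (http://127.0.0.1:52581 above) via --binary-mirror, so kubectl, kubeadm, and kubelet are fetched from it instead of the public release bucket. A minimal sketch of serving such a mirror from a local directory; the directory name and fixed port are illustrative assumptions (the test picks its own free port), and the layout is assumed to mirror the upstream release paths:

// Minimal sketch of a local binary mirror, assuming files are laid out like the
// upstream release URLs (e.g. ./binary-mirror/v1.28.4/bin/darwin/amd64/kubectl).
package main

import (
	"log"
	"net/http"
)

func main() {
	fs := http.FileServer(http.Dir("./binary-mirror"))
	log.Println("serving binary mirror on http://127.0.0.1:52581")
	log.Fatal(http.ListenAndServe("127.0.0.1:52581", fs))
}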

TestOffline (44.01s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-855000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-855000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (41.534016629s)
helpers_test.go:175: Cleaning up "offline-docker-855000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-855000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-855000: (2.478933697s)
--- PASS: TestOffline (44.01s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-444000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-444000: exit status 85 (213.435126ms)

-- stdout --
	* Profile "addons-444000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-444000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-444000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-444000: exit status 85 (193.00461ms)

-- stdout --
	* Profile "addons-444000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-444000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (340.5s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-444000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-444000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (5m40.495297416s)
--- PASS: TestAddons/Setup (340.50s)

TestAddons/parallel/InspektorGadget (10.98s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-44sj2" [b96c93ca-7149-443d-b4e0-7da5a55b8148] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004223938s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-444000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-444000: (5.977600916s)
--- PASS: TestAddons/parallel/InspektorGadget (10.98s)

TestAddons/parallel/MetricsServer (6.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.083612ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-6hbl9" [eee525c8-fa6b-47c4-9ab6-3f4b3b33193c] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0062035s
addons_test.go:415: (dbg) Run:  kubectl --context addons-444000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-444000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.87s)

TestAddons/parallel/HelmTiller (11.35s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.826492ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-4hj2d" [8521deff-fd79-4e3d-94a2-f00836c60a0d] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.023564816s
addons_test.go:473: (dbg) Run:  kubectl --context addons-444000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-444000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.56707633s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-444000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.35s)

TestAddons/parallel/CSI (64.62s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 15.445247ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-444000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-444000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [829ca571-9e00-46e0-b7d1-8335048445af] Pending
helpers_test.go:344: "task-pv-pod" [829ca571-9e00-46e0-b7d1-8335048445af] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [829ca571-9e00-46e0-b7d1-8335048445af] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.00350423s
addons_test.go:584: (dbg) Run:  kubectl --context addons-444000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-444000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-444000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-444000 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-444000 delete pod task-pv-pod: (1.202308916s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-444000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-444000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-444000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [10a5295d-ab69-4c1e-a898-e1c1ac0d0ce9] Pending
helpers_test.go:344: "task-pv-pod-restore" [10a5295d-ab69-4c1e-a898-e1c1ac0d0ce9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [10a5295d-ab69-4c1e-a898-e1c1ac0d0ce9] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004733154s
addons_test.go:626: (dbg) Run:  kubectl --context addons-444000 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-444000 delete pod task-pv-pod-restore: (1.04887112s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-444000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-444000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-444000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-444000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.175630107s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-444000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-darwin-amd64 -p addons-444000 addons disable volumesnapshots --alsologtostderr -v=1: (1.030286939s)
--- PASS: TestAddons/parallel/CSI (64.62s)
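
The CSI flow above is driven almost entirely by polling: helpers_test.go re-runs `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim reports Bound, and the same pattern is repeated for the restored claim and the volume snapshot (with a readyToUse query). A standalone sketch of that wait loop, assuming the same context, claim name, and namespace as the run above (illustrative only, not the suite's helper):

// Illustrative only (not helpers_test.go): poll a PVC phase via kubectl the
// way the waits above do, stopping once the claim is Bound or a timeout hits.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	// Context, claim name, and namespace match the run above.
	if err := waitForPVCBound("addons-444000", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The pod and snapshot waits in the log follow the same shape, only with different label selectors or jsonpath queries and readiness values.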

TestAddons/parallel/Headlamp (13.81s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-444000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-444000 --alsologtostderr -v=1: (1.802128441s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-dckms" [1f5ff23e-2ce6-4e0e-89a3-7e851140b87a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-dckms" [1f5ff23e-2ce6-4e0e-89a3-7e851140b87a] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004377702s
--- PASS: TestAddons/parallel/Headlamp (13.81s)

TestAddons/parallel/CloudSpanner (5.36s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7b4754d5d4-8nz9t" [562f3420-316c-49f5-b147-2e1c66675ea0] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005013819s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-444000
--- PASS: TestAddons/parallel/CloudSpanner (5.36s)

TestAddons/parallel/LocalPath (55.12s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-444000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-444000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [22a1c2d4-869a-4305-9a46-8cdfd6929815] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [22a1c2d4-869a-4305-9a46-8cdfd6929815] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [22a1c2d4-869a-4305-9a46-8cdfd6929815] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004652151s
addons_test.go:891: (dbg) Run:  kubectl --context addons-444000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-444000 ssh "cat /opt/local-path-provisioner/pvc-bdfc4cc0-56b7-43cb-9093-4c1e18b93fa0_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-444000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-444000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-444000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-444000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.18054019s)
--- PASS: TestAddons/parallel/LocalPath (55.12s)

TestAddons/parallel/NvidiaDevicePlugin (5.73s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-l9fnr" [2c5b4ae0-7caa-4ff0-bfe6-41c5121bce7c] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005285175s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-444000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.73s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-qsbsf" [f2cc906c-67b9-4f16-b1bd-4094bff2f86d] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006746895s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-444000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-444000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.8s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-444000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-444000: (11.059075189s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-444000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-444000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-444000
--- PASS: TestAddons/StoppedEnableDisable (11.80s)

TestCertOptions (27.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-491000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-491000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (23.860771669s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-491000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-491000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-491000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-491000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-491000: (2.501056143s)
--- PASS: TestCertOptions (27.21s)

TestCertExpiration (232.91s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-205000 --memory=2048 --cert-expiration=3m --driver=docker 
* Starting control plane node minikube in cluster minikube
* Download complete!
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-205000 --memory=2048 --cert-expiration=3m --driver=docker : (23.547170973s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-205000 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-205000 --memory=2048 --cert-expiration=8760h --driver=docker : (26.75789749s)
helpers_test.go:175: Cleaning up "cert-expiration-205000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-205000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-205000: (2.600445918s)
--- PASS: TestCertExpiration (232.91s)

TestDockerFlags (27.09s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-290000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-290000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (23.811781064s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-290000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-290000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-290000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-290000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-290000: (2.46093773s)
--- PASS: TestDockerFlags (27.09s)

TestForceSystemdFlag (26.96s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-375000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-375000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (24.233705471s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-375000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-375000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-375000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-375000: (2.28386895s)
--- PASS: TestForceSystemdFlag (26.96s)

TestForceSystemdEnv (24.92s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-897000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
* Starting control plane node minikube in cluster minikube
* Download complete!
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-897000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (21.988729109s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-897000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-897000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-897000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-897000: (2.483307869s)
--- PASS: TestForceSystemdEnv (24.92s)

TestHyperKitDriverInstallOrUpdate (7.74s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.74s)

TestErrorSpam/setup (22.49s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-768000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-768000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 --driver=docker : (22.486268138s)
--- PASS: TestErrorSpam/setup (22.49s)

TestErrorSpam/start (2.15s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 start --dry-run
--- PASS: TestErrorSpam/start (2.15s)

TestErrorSpam/status (1.31s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 status
--- PASS: TestErrorSpam/status (1.31s)

TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.98s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 unpause
--- PASS: TestErrorSpam/unpause (1.98s)

TestErrorSpam/stop (11.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 stop: (10.864858089s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-768000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-768000 stop
--- PASS: TestErrorSpam/stop (11.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/test/nested/copy/38899/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.65s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-525000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (38.648365287s)
--- PASS: TestFunctional/serial/StartWithProxy (38.65s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.52s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-525000 --alsologtostderr -v=8: (38.524028495s)
functional_test.go:659: soft start took 38.524499619s for "functional-525000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.52s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-525000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 cache add registry.k8s.io/pause:3.1: (3.80624824s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 cache add registry.k8s.io/pause:3.3: (3.643494568s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 cache add registry.k8s.io/pause:latest: (2.627261246s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.08s)

TestFunctional/serial/CacheCmd/cache/add_local (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1661833853/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache add minikube-local-cache-test:functional-525000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 cache add minikube-local-cache-test:functional-525000: (1.081561116s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache delete minikube-local-cache-test:functional-525000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-525000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.85s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (419.048445ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 cache reload: (2.144399001s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.43s)

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (1.26s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 kubectl -- --context functional-525000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 kubectl -- --context functional-525000 get pods: (1.25596952s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.26s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.69s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-525000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-525000 get pods: (1.688059078s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.69s)

TestFunctional/serial/ExtraConfig (40.33s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0213 18:19:40.355266   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:19:40.363006   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:19:40.373155   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:19:40.394650   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:19:40.434992   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:19:40.515707   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:19:40.675889   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:19:40.997085   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:19:41.638186   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:19:42.918347   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:19:45.478728   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:19:50.598992   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:20:00.838967   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-525000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.326071055s)
functional_test.go:757: restart took 40.326236354s for "functional-525000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.33s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-525000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.27s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 logs: (3.268048067s)
--- PASS: TestFunctional/serial/LogsCmd (3.27s)

TestFunctional/serial/LogsFileCmd (3.27s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3625864004/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3625864004/001/logs.txt: (3.271659581s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.27s)

TestFunctional/serial/InvalidService (4.66s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-525000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-525000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-525000: exit status 115 (613.672646ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32413 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-525000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.66s)

TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 config get cpus: exit status 14 (64.34471ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 config get cpus: exit status 14 (64.403803ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)

TestFunctional/parallel/DashboardCmd (20.83s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-525000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-525000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 41913: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.83s)

TestFunctional/parallel/DryRun (1.78s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-525000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (910.016105ms)

-- stdout --
	* [functional-525000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18165
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0213 18:23:57.982808   41794 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:23:57.983019   41794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:23:57.983024   41794 out.go:304] Setting ErrFile to fd 2...
	I0213 18:23:57.983028   41794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:23:57.983241   41794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 18:23:58.024406   41794 out.go:298] Setting JSON to false
	I0213 18:23:58.056111   41794 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":14297,"bootTime":1707863141,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 18:23:58.056213   41794 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 18:23:58.108289   41794 out.go:177] * [functional-525000] minikube v1.32.0 on Darwin 14.3.1
	I0213 18:23:58.150366   41794 out.go:177]   - MINIKUBE_LOCATION=18165
	I0213 18:23:58.129247   41794 notify.go:220] Checking for updates...
	I0213 18:23:58.192069   41794 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 18:23:58.234235   41794 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 18:23:58.276276   41794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 18:23:58.318057   41794 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	I0213 18:23:58.360255   41794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 18:23:58.382053   41794 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 18:23:58.382809   41794 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 18:23:58.512004   41794 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 18:23:58.512174   41794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:23:58.645791   41794 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 02:23:58.617875579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:23:58.666970   41794 out.go:177] * Using the docker driver based on existing profile
	I0213 18:23:58.687940   41794 start.go:298] selected driver: docker
	I0213 18:23:58.687964   41794 start.go:902] validating driver "docker" against &{Name:functional-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-525000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 18:23:58.688059   41794 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 18:23:58.713011   41794 out.go:177] 
	W0213 18:23:58.734037   41794 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0213 18:23:58.754778   41794 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.78s)

TestFunctional/parallel/InternationalLanguage (0.84s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-525000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (835.715985ms)

-- stdout --
	* [functional-525000] minikube v1.32.0 sur Darwin 14.3.1
	  - MINIKUBE_LOCATION=18165
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0213 18:23:59.724835   41882 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:23:59.725087   41882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:23:59.725092   41882 out.go:304] Setting ErrFile to fd 2...
	I0213 18:23:59.725097   41882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:23:59.725321   41882 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 18:23:59.727191   41882 out.go:298] Setting JSON to false
	I0213 18:23:59.751012   41882 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":14298,"bootTime":1707863141,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 18:23:59.751096   41882 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 18:23:59.772614   41882 out.go:177] * [functional-525000] minikube v1.32.0 sur Darwin 14.3.1
	I0213 18:23:59.851517   41882 out.go:177]   - MINIKUBE_LOCATION=18165
	I0213 18:23:59.830291   41882 notify.go:220] Checking for updates...
	I0213 18:23:59.894174   41882 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	I0213 18:23:59.936369   41882 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 18:23:59.978300   41882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 18:24:00.020125   41882 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	I0213 18:24:00.062293   41882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 18:24:00.105231   41882 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 18:24:00.106101   41882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 18:24:00.173778   41882 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 18:24:00.173932   41882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 18:24:00.283677   41882 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 02:24:00.27306652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 18:24:00.305018   41882 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0213 18:24:00.363023   41882 start.go:298] selected driver: docker
	I0213 18:24:00.363053   41882 start.go:902] validating driver "docker" against &{Name:functional-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-525000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 18:24:00.363154   41882 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 18:24:00.386860   41882 out.go:177] 
	W0213 18:24:00.407888   41882 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0213 18:24:00.430915   41882 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.84s)

TestFunctional/parallel/StatusCmd (1.37s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.37s)

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (40.41s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [16af7b63-7730-4703-8a54-f7ee896d50da] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004879554s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-525000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-525000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-525000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-525000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a31dafe9-436d-4e4a-8084-dc871f5d0665] Pending
helpers_test.go:344: "sp-pod" [a31dafe9-436d-4e4a-8084-dc871f5d0665] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a31dafe9-436d-4e4a-8084-dc871f5d0665] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003562267s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-525000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-525000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-525000 delete -f testdata/storage-provisioner/pod.yaml: (1.698031461s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-525000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e0fe7718-c623-49ff-bfe1-761166640f15] Pending
helpers_test.go:344: "sp-pod" [e0fe7718-c623-49ff-bfe1-761166640f15] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e0fe7718-c623-49ff-bfe1-761166640f15] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004373694s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-525000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.41s)

TestFunctional/parallel/SSHCmd (0.8s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.80s)

TestFunctional/parallel/CpCmd (2.74s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh -n functional-525000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cp functional-525000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd270551417/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh -n functional-525000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh -n functional-525000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.74s)
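The cp subcommand copies files in both directions between the host and the node. The commands below mirror the run above (sketch; the local destination ./cp-test.txt stands in for the temp directory the test uses):

	out/minikube-darwin-amd64 -p functional-525000 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-darwin-amd64 -p functional-525000 ssh -n functional-525000 "sudo cat /home/docker/cp-test.txt"
	out/minikube-darwin-amd64 -p functional-525000 cp functional-525000:/home/docker/cp-test.txt ./cp-test.txt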

                                                
                                    
TestFunctional/parallel/MySQL (210.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-525000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-ctgl6" [32d20113-c77b-475f-9fdb-915d99c09ef2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-ctgl6" [32d20113-c77b-475f-9fdb-915d99c09ef2] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 3m27.004762588s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-525000 exec mysql-859648c796-ctgl6 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-525000 exec mysql-859648c796-ctgl6 -- mysql -ppassword -e "show databases;": exit status 1 (128.94408ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-525000 exec mysql-859648c796-ctgl6 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-525000 exec mysql-859648c796-ctgl6 -- mysql -ppassword -e "show databases;": exit status 1 (127.225849ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-525000 exec mysql-859648c796-ctgl6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (210.59s)
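The ERROR 2002 lines above are expected while mysqld is still initializing inside the container: the pod already reports Running, but the server is not yet accepting connections on its socket, so the test simply retries the query until it succeeds. A manual retry loop with the same effect (sketch; the pod name mysql-859648c796-ctgl6 is specific to this run):

	until kubectl --context functional-525000 exec mysql-859648c796-ctgl6 -- \
	      mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
	  sleep 5
	done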

                                                
                                    
TestFunctional/parallel/FileSync (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/38899/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /etc/test/nested/copy/38899/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.48s)

                                                
                                    
TestFunctional/parallel/CertSync (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/38899.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /etc/ssl/certs/38899.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/38899.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /usr/share/ca-certificates/38899.pem"
E0213 18:20:21.319217   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/388992.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /etc/ssl/certs/388992.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/388992.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /usr/share/ca-certificates/388992.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.76s)
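Besides the PID-named copies (38899.pem, 388992.pem), the test checks hash-named files such as /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0; these follow OpenSSL's subject-hash naming convention so the synced certs can be found via a CApath lookup. One way to confirm the pairing inside the node (sketch; which pem maps to which hash is an assumption, not shown verbatim in this log):

	out/minikube-darwin-amd64 -p functional-525000 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/38899.pem"
	# the printed hash is expected to match the .0 file name synced alongside it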

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-525000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh "sudo systemctl is-active crio": exit status 1 (553.083536ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
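The non-zero exit here is the expected result, not a failure: with the docker runtime selected, crio is not running, and systemctl is-active exits with status 3 for inactive units (surfaced above as "Process exited with status 3"). Checked by hand (sketch):

	out/minikube-darwin-amd64 -p functional-525000 ssh "sudo systemctl is-active crio"
	# prints "inactive" and returns a non-zero exit code, which is what the test asserts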

                                                
                                    
TestFunctional/parallel/License (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-amd64 license: (1.365525935s)
--- PASS: TestFunctional/parallel/License (1.37s)

                                                
                                    
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-525000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-525000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-525000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-525000 image ls --format short --alsologtostderr:
I0213 18:24:17.030389   42160 out.go:291] Setting OutFile to fd 1 ...
I0213 18:24:17.030667   42160 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:17.030674   42160 out.go:304] Setting ErrFile to fd 2...
I0213 18:24:17.030678   42160 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:17.030873   42160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
I0213 18:24:17.031512   42160 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 18:24:17.031604   42160 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 18:24:17.031984   42160 cli_runner.go:164] Run: docker container inspect functional-525000 --format={{.State.Status}}
I0213 18:24:17.085610   42160 ssh_runner.go:195] Run: systemctl --version
I0213 18:24:17.085691   42160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525000
I0213 18:24:17.139334   42160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/functional-525000/id_rsa Username:docker}
I0213 18:24:17.231908   42160 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-525000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| docker.io/library/nginx                     | latest            | 247f7abff9f70 | 187MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-525000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-525000 | 52d93f6b172e1 | 30B    |
| docker.io/library/nginx                     | alpine            | 2b70e4aaac6b5 | 42.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-525000 image ls --format table --alsologtostderr:
I0213 18:24:21.696317   42197 out.go:291] Setting OutFile to fd 1 ...
I0213 18:24:21.696521   42197 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:21.696528   42197 out.go:304] Setting ErrFile to fd 2...
I0213 18:24:21.696532   42197 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:21.696724   42197 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
I0213 18:24:21.697327   42197 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 18:24:21.697421   42197 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 18:24:21.697815   42197 cli_runner.go:164] Run: docker container inspect functional-525000 --format={{.State.Status}}
I0213 18:24:21.751101   42197 ssh_runner.go:195] Run: systemctl --version
I0213 18:24:21.751188   42197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525000
I0213 18:24:21.802870   42197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/functional-525000/id_rsa Username:docker}
I0213 18:24:21.897089   42197 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-525000 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-525000"],"size":"32900000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"247f7abff9f7097bbdab57df76fedd124d1e24a6ec4944fb5ef0ad128997ce05","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"2b70e4aaac6b5370bf3a556f5e13156692351696dd5d7c5530d117aa21772748","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ead0a4a53df89fd173874b46093b6e
62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"
repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"52d93f6b172e147fa60fa177c2c8892c6369fa1c3623f0545d43a12055e495c1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-525000"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1
.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-525000 image ls --format json --alsologtostderr:
I0213 18:24:21.387905   42189 out.go:291] Setting OutFile to fd 1 ...
I0213 18:24:21.388089   42189 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:21.388094   42189 out.go:304] Setting ErrFile to fd 2...
I0213 18:24:21.388099   42189 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:21.388292   42189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
I0213 18:24:21.388916   42189 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 18:24:21.389018   42189 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 18:24:21.389409   42189 cli_runner.go:164] Run: docker container inspect functional-525000 --format={{.State.Status}}
I0213 18:24:21.442057   42189 ssh_runner.go:195] Run: systemctl --version
I0213 18:24:21.442134   42189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525000
I0213 18:24:21.496530   42189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/functional-525000/id_rsa Username:docker}
I0213 18:24:21.590700   42189 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-525000 image ls --format yaml --alsologtostderr:
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 247f7abff9f7097bbdab57df76fedd124d1e24a6ec4944fb5ef0ad128997ce05
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 52d93f6b172e147fa60fa177c2c8892c6369fa1c3623f0545d43a12055e495c1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-525000
size: "30"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-525000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 2b70e4aaac6b5370bf3a556f5e13156692351696dd5d7c5530d117aa21772748
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-525000 image ls --format yaml --alsologtostderr:
I0213 18:24:17.337883   42166 out.go:291] Setting OutFile to fd 1 ...
I0213 18:24:17.338076   42166 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:17.338082   42166 out.go:304] Setting ErrFile to fd 2...
I0213 18:24:17.338086   42166 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:17.338284   42166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
I0213 18:24:17.338918   42166 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 18:24:17.339019   42166 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 18:24:17.339462   42166 cli_runner.go:164] Run: docker container inspect functional-525000 --format={{.State.Status}}
I0213 18:24:17.392603   42166 ssh_runner.go:195] Run: systemctl --version
I0213 18:24:17.392683   42166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525000
I0213 18:24:17.445105   42166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/functional-525000/id_rsa Username:docker}
I0213 18:24:17.539480   42166 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh pgrep buildkitd: exit status 1 (372.754233ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image build -t localhost/my-image:functional-525000 testdata/build --alsologtostderr
2024/02/13 18:24:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 image build -t localhost/my-image:functional-525000 testdata/build --alsologtostderr: (4.704904676s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-525000 image build -t localhost/my-image:functional-525000 testdata/build --alsologtostderr:
I0213 18:24:18.018194   42182 out.go:291] Setting OutFile to fd 1 ...
I0213 18:24:18.018842   42182 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:18.018848   42182 out.go:304] Setting ErrFile to fd 2...
I0213 18:24:18.018853   42182 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:18.019039   42182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
I0213 18:24:18.019638   42182 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 18:24:18.020706   42182 config.go:182] Loaded profile config "functional-525000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 18:24:18.021172   42182 cli_runner.go:164] Run: docker container inspect functional-525000 --format={{.State.Status}}
I0213 18:24:18.074203   42182 ssh_runner.go:195] Run: systemctl --version
I0213 18:24:18.074281   42182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525000
I0213 18:24:18.128260   42182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53284 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/functional-525000/id_rsa Username:docker}
I0213 18:24:18.221581   42182 build_images.go:151] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2822608082.tar
I0213 18:24:18.221657   42182 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0213 18:24:18.237205   42182 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2822608082.tar
I0213 18:24:18.241575   42182 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2822608082.tar: stat -c "%s %y" /var/lib/minikube/build/build.2822608082.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2822608082.tar': No such file or directory
I0213 18:24:18.241614   42182 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2822608082.tar --> /var/lib/minikube/build/build.2822608082.tar (3072 bytes)
I0213 18:24:18.281862   42182 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2822608082
I0213 18:24:18.298737   42182 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2822608082 -xf /var/lib/minikube/build/build.2822608082.tar
I0213 18:24:18.314721   42182 docker.go:360] Building image: /var/lib/minikube/build/build.2822608082
I0213 18:24:18.314799   42182 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-525000 /var/lib/minikube/build/build.2822608082
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 2.5s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:7fc94a7dde1be31e38907e992fcc5c2e67086eb4815e137cdb2a272dd1f22187 done
#8 naming to localhost/my-image:functional-525000 done
#8 DONE 0.0s
I0213 18:24:22.608515   42182 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-525000 /var/lib/minikube/build/build.2822608082: (4.293764936s)
I0213 18:24:22.608593   42182 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2822608082
I0213 18:24:22.624241   42182 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2822608082.tar
I0213 18:24:22.639276   42182 build_images.go:207] Built localhost/my-image:functional-525000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2822608082.tar
I0213 18:24:22.639323   42182 build_images.go:123] succeeded building to: functional-525000
I0213 18:24:22.639327   42182 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.38s)
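The build steps logged above (load Dockerfile, FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) make it possible to reproduce an equivalent build by hand. A sketch under the assumption that testdata/build contains roughly the following (its exact contents are not shown in this log):

	mkdir -p /tmp/minikube-build-demo && cd /tmp/minikube-build-demo
	printf 'hello\n' > content.txt
	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	out/minikube-darwin-amd64 -p functional-525000 image build -t localhost/my-image:functional-525000 .
	out/minikube-darwin-amd64 -p functional-525000 image ls    # localhost/my-image should now be listed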

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (5.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.494637294s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-525000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.56s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-525000 docker-env) && out/minikube-darwin-amd64 status -p functional-525000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-525000 docker-env) && out/minikube-darwin-amd64 status -p functional-525000": (1.243606846s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-525000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.01s)
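docker-env prints the environment variables that point a local docker CLI at the Docker daemon inside the minikube node, which is why the images listed afterwards come from the cluster rather than from the host. The same pattern outside the test harness (sketch):

	eval "$(out/minikube-darwin-amd64 -p functional-525000 docker-env)"
	docker images    # now served by the node's Docker daemon
	eval "$(out/minikube-darwin-amd64 -p functional-525000 docker-env --unset)"    # not exercised above; restores the host environment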

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image load --daemon gcr.io/google-containers/addon-resizer:functional-525000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 image load --daemon gcr.io/google-containers/addon-resizer:functional-525000 --alsologtostderr: (4.970764474s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image load --daemon gcr.io/google-containers/addon-resizer:functional-525000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 image load --daemon gcr.io/google-containers/addon-resizer:functional-525000 --alsologtostderr: (2.657073813s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.229838656s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-525000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image load --daemon gcr.io/google-containers/addon-resizer:functional-525000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 image load --daemon gcr.io/google-containers/addon-resizer:functional-525000 --alsologtostderr: (3.411868626s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image save gcr.io/google-containers/addon-resizer:functional-525000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 image save gcr.io/google-containers/addon-resizer:functional-525000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.14071895s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image rm gcr.io/google-containers/addon-resizer:functional-525000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.763829671s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-525000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image save --daemon gcr.io/google-containers/addon-resizer:functional-525000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 image save --daemon gcr.io/google-containers/addon-resizer:functional-525000 --alsologtostderr: (1.190378804s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-525000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.30s)
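Taken together, the ImageSaveToFile / ImageRemove / ImageLoadFromFile / ImageSaveDaemon runs above exercise a full image round-trip between the node and the host. The same flow, condensed (sketch; ./addon-resizer-save.tar stands in for the workspace path used by the test):

	out/minikube-darwin-amd64 -p functional-525000 image save gcr.io/google-containers/addon-resizer:functional-525000 ./addon-resizer-save.tar
	out/minikube-darwin-amd64 -p functional-525000 image rm gcr.io/google-containers/addon-resizer:functional-525000
	out/minikube-darwin-amd64 -p functional-525000 image load ./addon-resizer-save.tar
	out/minikube-darwin-amd64 -p functional-525000 image save --daemon gcr.io/google-containers/addon-resizer:functional-525000
	docker image inspect gcr.io/google-containers/addon-resizer:functional-525000    # confirms it landed in the host daemon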

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (61.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-525000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-525000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-5brpc" [25012ee2-89d8-4e69-bfe3-064b8febe6fb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0213 18:21:02.278911   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
helpers_test.go:344: "hello-node-d7447cc7f-5brpc" [25012ee2-89d8-4e69-bfe3-064b8febe6fb] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 1m1.004390443s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (61.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 service list -o json
functional_test.go:1490: Took "464.301619ms" to run "out/minikube-darwin-amd64 -p functional-525000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 service --namespace=default --https --url hello-node: signal: killed (15.003071309s)

                                                
                                                
-- stdout --
	https://127.0.0.1:53522

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:53522
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 service hello-node --url --format={{.IP}}: signal: killed (15.004379891s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 service hello-node --url
E0213 18:22:24.198060   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 service hello-node --url: signal: killed (15.003711846s)

                                                
                                                
-- stdout --
	http://127.0.0.1:53559

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:53559
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)
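The "signal: killed" exits in the ServiceCmd/HTTPS, Format and URL runs are expected on this platform: with the Docker driver on darwin, minikube service keeps a tunnel process in the foreground (hence the "terminal needs to be open" warning), so the test reads the printed URL and then kills the command after its 15s window. By hand it looks like this (sketch; the port is assigned per run):

	out/minikube-darwin-amd64 -p functional-525000 service hello-node --url
	# prints something like http://127.0.0.1:53559 and keeps running to hold the tunnel open;
	# from another terminal: curl http://127.0.0.1:53559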

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 41596: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (36.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-525000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [69ddbfe9-8815-4688-9e52-26d9e4596299] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [69ddbfe9-8815-4688-9e52-26d9e4596299] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 36.003754599s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (36.16s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-525000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 41626: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "426.079949ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "83.532171ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "427.163654ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "81.608197ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/MountCmd/any-port (12.44s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4001364615/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1707877437170354000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4001364615/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1707877437170354000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4001364615/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1707877437170354000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4001364615/001/test-1707877437170354000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (400.338502ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 14 02:23 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 14 02:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 14 02:23 test-1707877437170354000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh cat /mount-9p/test-1707877437170354000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-525000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6305f341-fe2b-45aa-94d3-b593a177498b] Pending
helpers_test.go:344: "busybox-mount" [6305f341-fe2b-45aa-94d3-b593a177498b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6305f341-fe2b-45aa-94d3-b593a177498b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6305f341-fe2b-45aa-94d3-b593a177498b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004507824s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-525000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4001364615/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.44s)

TestFunctional/parallel/MountCmd/specific-port (2.54s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1282401642/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (477.130112ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1282401642/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh "sudo umount -f /mount-9p": exit status 1 (428.52586ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-525000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1282401642/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.54s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.97s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup160066325/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup160066325/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup160066325/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T" /mount1: exit status 1 (623.699541ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-525000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup160066325/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup160066325/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup160066325/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.97s)

TestFunctional/delete_addon-resizer_images (0.14s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-525000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

TestFunctional/delete_my-image_image (0.05s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-525000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-525000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (21.64s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-651000 --driver=docker 
E0213 18:24:40.350248   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-651000 --driver=docker : (21.639680876s)
--- PASS: TestImageBuild/serial/Setup (21.64s)

TestImageBuild/serial/NormalBuild (4.32s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-651000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-651000: (4.316660763s)
--- PASS: TestImageBuild/serial/NormalBuild (4.32s)

TestImageBuild/serial/BuildWithBuildArg (1.14s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-651000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-651000: (1.142575479s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.14s)

TestImageBuild/serial/BuildWithDockerIgnore (0.96s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-651000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.96s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.04s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-651000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-651000: (1.041259833s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.04s)

TestJSONOutput/start/Command (75.46s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-522000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-522000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m15.460959894s)
--- PASS: TestJSONOutput/start/Command (75.46s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-522000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-522000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.84s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-522000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-522000 --output=json --user=testUser: (10.840816587s)
--- PASS: TestJSONOutput/stop/Command (10.84s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.77s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-723000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-723000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (382.83405ms)

-- stdout --
	{"specversion":"1.0","id":"b6e12281-1cf8-46fb-b513-8b9a8f8e1b88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-723000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2fbd2c7-9c60-461f-a59d-4f5e1dd75a27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18165"}}
	{"specversion":"1.0","id":"00bb96b7-d5ed-4628-ab50-1dc38f0b80ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig"}}
	{"specversion":"1.0","id":"996fde56-3908-499c-aa58-6e8fe2d90445","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"c7476d93-7218-4622-9804-76bcb372c369","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f958c3c2-0501-4444-976b-6e2bfc64af3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube"}}
	{"specversion":"1.0","id":"88867587-0ac3-4f7a-9236-74ede24b0ee1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"93e73c2b-f4bc-4cfe-9f05-e792f701a040","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-723000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-723000
--- PASS: TestErrorJSONOutput (0.77s)
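Note on the stdout captured above: --output=json makes minikube emit one CloudEvents-style JSON object per line (specversion, id, source, type, datacontenttype, data). A minimal sketch of consuming that stream (illustrative only, not minikube's own types; the field names are taken from the output shown above):

// parse_events.go - decode CloudEvents-style lines from stdin and print the
// event type plus its message/name, e.g. piped from "minikube start --output=json".
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// step/info events carry data.message; error events also carry
		// data.exitcode and data.name, as in the DRV_UNSUPPORTED_OS line above.
		fmt.Printf("%s: %s %s\n", ev.Type, ev.Data["name"], ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}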

TestKicCustomNetwork/create_custom_network (24.73s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-418000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-418000 --network=: (22.244807344s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-418000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-418000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-418000: (2.427939764s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.73s)

TestKicCustomNetwork/use_default_bridge_network (24.06s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-833000 --network=bridge
E0213 18:35:23.869037   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-833000 --network=bridge: (21.745229107s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-833000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-833000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-833000: (2.261356604s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.06s)

TestKicExistingNetwork (24.65s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-678000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-678000 --network=existing-network: (22.191701332s)
helpers_test.go:175: Cleaning up "existing-network-678000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-678000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-678000: (2.11604634s)
--- PASS: TestKicExistingNetwork (24.65s)

TestKicCustomSubnet (24.21s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-334000 --subnet=192.168.60.0/24
E0213 18:36:03.435079   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-334000 --subnet=192.168.60.0/24: (21.736362208s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-334000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-334000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-334000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-334000: (2.420246512s)
--- PASS: TestKicCustomSubnet (24.21s)

TestKicStaticIP (24.8s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-882000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-882000 --static-ip=192.168.200.200: (22.130726179s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-882000 ip
helpers_test.go:175: Cleaning up "static-ip-882000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-882000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-882000: (2.430332121s)
--- PASS: TestKicStaticIP (24.80s)

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (51.09s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-454000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-454000 --driver=docker : (21.74772803s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-456000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-456000 --driver=docker : (22.693358547s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-454000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-456000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-456000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-456000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-456000: (2.427768044s)
helpers_test.go:175: Cleaning up "first-454000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-454000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-454000: (2.466318845s)
--- PASS: TestMinikubeProfile (51.09s)

TestMountStart/serial/StartWithMountFirst (7.93s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-347000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-347000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.933143381s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.93s)

TestMountStart/serial/VerifyMountFirst (0.39s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-347000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (7.88s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-362000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-362000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.883636302s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.88s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-362000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (2.08s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-347000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-347000 --alsologtostderr -v=5: (2.07915906s)
--- PASS: TestMountStart/serial/DeleteFirst (2.08s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-362000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.56s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-362000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-362000: (1.558797247s)
--- PASS: TestMountStart/serial/Stop (1.56s)

TestMountStart/serial/RestartStopped (6.05s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-362000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-362000: (5.048547012s)
--- PASS: TestMountStart/serial/RestartStopped (6.05s)

TestMountStart/serial/VerifyMountPostStop (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-362000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (64.95s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-315000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-315000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m4.172288194s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.95s)

TestMultiNode/serial/DeployApp2Nodes (45.53s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-315000 -- rollout status deployment/busybox: (6.595896982s)
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0213 18:39:40.386549   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- exec busybox-5b5d89c9d6-69nhf -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- exec busybox-5b5d89c9d6-l44fr -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- exec busybox-5b5d89c9d6-69nhf -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- exec busybox-5b5d89c9d6-l44fr -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- exec busybox-5b5d89c9d6-69nhf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- exec busybox-5b5d89c9d6-l44fr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (45.53s)
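Note on the repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines above: the jsonpath query is simply re-run until both busybox pods report an IP. A minimal sketch of that kind of retry loop (illustrative only, not the test's implementation; the context name, selector-free query, and timeout are assumptions based on the commands shown above):

// poll_podips.go - re-run the jsonpath query until two distinct pod IPs appear
// or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "multinode-315000",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			ips := strings.Fields(strings.TrimSpace(string(out)))
			if len(ips) >= 2 && ips[0] != ips[1] {
				fmt.Println("pod IPs:", ips)
				return
			}
		}
		time.Sleep(5 * time.Second) // "may be temporary": wait and retry
	}
	fmt.Println("timed out waiting for 2 distinct pod IPs")
}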

TestMultiNode/serial/PingHostFrom2Pods (0.95s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- exec busybox-5b5d89c9d6-69nhf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- exec busybox-5b5d89c9d6-69nhf -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- exec busybox-5b5d89c9d6-l44fr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-315000 -- exec busybox-5b5d89c9d6-l44fr -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

TestMultiNode/serial/AddNode (15.4s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-315000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-315000 -v 3 --alsologtostderr: (14.337209615s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-darwin-amd64 -p multinode-315000 status --alsologtostderr: (1.061920827s)
--- PASS: TestMultiNode/serial/AddNode (15.40s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-315000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.47s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

TestMultiNode/serial/CopyFile (14.78s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-darwin-amd64 -p multinode-315000 status --output json --alsologtostderr: (1.006836566s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp testdata/cp-test.txt multinode-315000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp multinode-315000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile1266966209/001/cp-test_multinode-315000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp multinode-315000:/home/docker/cp-test.txt multinode-315000-m02:/home/docker/cp-test_multinode-315000_multinode-315000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m02 "sudo cat /home/docker/cp-test_multinode-315000_multinode-315000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp multinode-315000:/home/docker/cp-test.txt multinode-315000-m03:/home/docker/cp-test_multinode-315000_multinode-315000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m03 "sudo cat /home/docker/cp-test_multinode-315000_multinode-315000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp testdata/cp-test.txt multinode-315000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp multinode-315000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile1266966209/001/cp-test_multinode-315000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp multinode-315000-m02:/home/docker/cp-test.txt multinode-315000:/home/docker/cp-test_multinode-315000-m02_multinode-315000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000 "sudo cat /home/docker/cp-test_multinode-315000-m02_multinode-315000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp multinode-315000-m02:/home/docker/cp-test.txt multinode-315000-m03:/home/docker/cp-test_multinode-315000-m02_multinode-315000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m03 "sudo cat /home/docker/cp-test_multinode-315000-m02_multinode-315000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp testdata/cp-test.txt multinode-315000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp multinode-315000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile1266966209/001/cp-test_multinode-315000-m03.txt
E0213 18:40:23.865717   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp multinode-315000-m03:/home/docker/cp-test.txt multinode-315000:/home/docker/cp-test_multinode-315000-m03_multinode-315000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000 "sudo cat /home/docker/cp-test_multinode-315000-m03_multinode-315000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 cp multinode-315000-m03:/home/docker/cp-test.txt multinode-315000-m02:/home/docker/cp-test_multinode-315000-m03_multinode-315000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 ssh -n multinode-315000-m02 "sudo cat /home/docker/cp-test_multinode-315000-m03_multinode-315000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.78s)

TestMultiNode/serial/StopNode (3.08s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-darwin-amd64 -p multinode-315000 node stop m03: (1.513302341s)
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-315000 status: exit status 7 (760.68619ms)

                                                
                                                
-- stdout --
	multinode-315000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-315000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-315000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-315000 status --alsologtostderr: exit status 7 (801.773303ms)

                                                
                                                
-- stdout --
	multinode-315000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-315000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-315000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 18:40:29.676973   45547 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:40:29.677261   45547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:40:29.677266   45547 out.go:304] Setting ErrFile to fd 2...
	I0213 18:40:29.677271   45547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:40:29.677497   45547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 18:40:29.677721   45547 out.go:298] Setting JSON to false
	I0213 18:40:29.677745   45547 mustload.go:65] Loading cluster: multinode-315000
	I0213 18:40:29.677776   45547 notify.go:220] Checking for updates...
	I0213 18:40:29.678122   45547 config.go:182] Loaded profile config "multinode-315000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 18:40:29.678134   45547 status.go:255] checking status of multinode-315000 ...
	I0213 18:40:29.678623   45547 cli_runner.go:164] Run: docker container inspect multinode-315000 --format={{.State.Status}}
	I0213 18:40:29.731657   45547 status.go:330] multinode-315000 host status = "Running" (err=<nil>)
	I0213 18:40:29.731694   45547 host.go:66] Checking if "multinode-315000" exists ...
	I0213 18:40:29.731937   45547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-315000
	I0213 18:40:29.784305   45547 host.go:66] Checking if "multinode-315000" exists ...
	I0213 18:40:29.784594   45547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 18:40:29.784676   45547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-315000
	I0213 18:40:29.837269   45547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54173 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/multinode-315000/id_rsa Username:docker}
	I0213 18:40:29.933525   45547 ssh_runner.go:195] Run: systemctl --version
	I0213 18:40:29.938420   45547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 18:40:29.956289   45547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-315000
	I0213 18:40:30.009033   45547 kubeconfig.go:92] found "multinode-315000" server: "https://127.0.0.1:54177"
	I0213 18:40:30.009064   45547 api_server.go:166] Checking apiserver status ...
	I0213 18:40:30.009110   45547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 18:40:30.026269   45547 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2242/cgroup
	W0213 18:40:30.042583   45547 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2242/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0213 18:40:30.042686   45547 ssh_runner.go:195] Run: ls
	I0213 18:40:30.047114   45547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54177/healthz ...
	I0213 18:40:30.052824   45547 api_server.go:279] https://127.0.0.1:54177/healthz returned 200:
	ok
	I0213 18:40:30.052841   45547 status.go:421] multinode-315000 apiserver status = Running (err=<nil>)
	I0213 18:40:30.052850   45547 status.go:257] multinode-315000 status: &{Name:multinode-315000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0213 18:40:30.052861   45547 status.go:255] checking status of multinode-315000-m02 ...
	I0213 18:40:30.053108   45547 cli_runner.go:164] Run: docker container inspect multinode-315000-m02 --format={{.State.Status}}
	I0213 18:40:30.146133   45547 status.go:330] multinode-315000-m02 host status = "Running" (err=<nil>)
	I0213 18:40:30.146159   45547 host.go:66] Checking if "multinode-315000-m02" exists ...
	I0213 18:40:30.146399   45547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-315000-m02
	I0213 18:40:30.199326   45547 host.go:66] Checking if "multinode-315000-m02" exists ...
	I0213 18:40:30.199583   45547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 18:40:30.199632   45547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-315000-m02
	I0213 18:40:30.252305   45547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54216 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/multinode-315000-m02/id_rsa Username:docker}
	I0213 18:40:30.347222   45547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 18:40:30.364537   45547 status.go:257] multinode-315000-m02 status: &{Name:multinode-315000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0213 18:40:30.364558   45547 status.go:255] checking status of multinode-315000-m03 ...
	I0213 18:40:30.364843   45547 cli_runner.go:164] Run: docker container inspect multinode-315000-m03 --format={{.State.Status}}
	I0213 18:40:30.418975   45547 status.go:330] multinode-315000-m03 host status = "Stopped" (err=<nil>)
	I0213 18:40:30.419000   45547 status.go:343] host is not running, skipping remaining checks
	I0213 18:40:30.419006   45547 status.go:257] multinode-315000-m03 status: &{Name:multinode-315000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.08s)
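The --alsologtostderr trace above shows the order in which `minikube status` checks each node: inspect the Docker container state, SSH in to test whether the kubelet systemd unit is active, then probe the apiserver's /healthz endpoint. Below is a minimal Go sketch of just the first of those steps, assuming the `docker` CLI is on PATH; it is an illustration of the container-state check seen in the trace, not minikube's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors the first check in the status trace above:
// `docker container inspect <name> --format={{.State.Status}}`.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format={{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, node := range []string{"multinode-315000", "multinode-315000-m02", "multinode-315000-m03"} {
		state, err := containerState(node)
		if err != nil {
			fmt.Printf("%s: %v\n", node, err)
			continue
		}
		fmt.Printf("%s: %s\n", node, state) // e.g. "running" or "exited"
	}
}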

                                                
                                    
TestMultiNode/serial/StartAfterStop (14.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-315000 node start m03 --alsologtostderr: (13.178489157s)
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (14.27s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (102.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-315000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-315000
multinode_test.go:318: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-315000: (22.903931421s)
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-315000 --wait=true -v=8 --alsologtostderr
E0213 18:41:46.966641   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-315000 --wait=true -v=8 --alsologtostderr: (1m19.08772655s)
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-315000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (102.12s)

                                                
                                    
TestMultiNode/serial/DeleteNode (6.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p multinode-315000 node delete m03: (5.159626395s)
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.05s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 stop
multinode_test.go:342: (dbg) Done: out/minikube-darwin-amd64 -p multinode-315000 stop: (21.476836785s)
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-315000 status: exit status 7 (159.963082ms)

                                                
                                                
-- stdout --
	multinode-315000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-315000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-315000 status --alsologtostderr: exit status 7 (159.571489ms)

                                                
                                                
-- stdout --
	multinode-315000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-315000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 18:42:54.533372   46000 out.go:291] Setting OutFile to fd 1 ...
	I0213 18:42:54.533651   46000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:42:54.533656   46000 out.go:304] Setting ErrFile to fd 2...
	I0213 18:42:54.533661   46000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 18:42:54.533842   46000 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
	I0213 18:42:54.534031   46000 out.go:298] Setting JSON to false
	I0213 18:42:54.534055   46000 mustload.go:65] Loading cluster: multinode-315000
	I0213 18:42:54.534083   46000 notify.go:220] Checking for updates...
	I0213 18:42:54.534373   46000 config.go:182] Loaded profile config "multinode-315000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 18:42:54.534384   46000 status.go:255] checking status of multinode-315000 ...
	I0213 18:42:54.534800   46000 cli_runner.go:164] Run: docker container inspect multinode-315000 --format={{.State.Status}}
	I0213 18:42:54.585900   46000 status.go:330] multinode-315000 host status = "Stopped" (err=<nil>)
	I0213 18:42:54.585941   46000 status.go:343] host is not running, skipping remaining checks
	I0213 18:42:54.585950   46000 status.go:257] multinode-315000 status: &{Name:multinode-315000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0213 18:42:54.585982   46000 status.go:255] checking status of multinode-315000-m02 ...
	I0213 18:42:54.586243   46000 cli_runner.go:164] Run: docker container inspect multinode-315000-m02 --format={{.State.Status}}
	I0213 18:42:54.637338   46000 status.go:330] multinode-315000-m02 host status = "Stopped" (err=<nil>)
	I0213 18:42:54.637363   46000 status.go:343] host is not running, skipping remaining checks
	I0213 18:42:54.637372   46000 status.go:257] multinode-315000-m02 status: &{Name:multinode-315000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.80s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (82.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-315000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:382: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-315000 --wait=true -v=8 --alsologtostderr --driver=docker : (1m21.703787405s)
multinode_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-315000 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.59s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-315000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-315000-m02 --driver=docker 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-315000-m02 --driver=docker : exit status 14 (430.747545ms)

                                                
                                                
-- stdout --
	* [multinode-315000-m02] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18165
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-315000-m02' is duplicated with machine name 'multinode-315000-m02' in profile 'multinode-315000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-315000-m03 --driver=docker 
E0213 18:44:40.363675   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-315000-m03 --driver=docker : (23.074243662s)
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-315000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-315000: exit status 80 (495.696241ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-315000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-315000-m03 already exists in multinode-315000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-315000-m03
multinode_test.go:500: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-315000-m03: (2.477513003s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.54s)
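Both non-zero exits above come from profile-name validation: a new profile may not reuse an existing profile name or a machine name owned by an existing multi-node profile. The following is a hypothetical Go sketch of that rule, using the names from this run; the map and helper are illustrative assumptions, not minikube's actual implementation.

package main

import "fmt"

// profiles maps profile name -> machine names, matching the state in this run
// (m03 was removed by the earlier DeleteNode step, so only m02 remains).
var profiles = map[string][]string{
	"multinode-315000": {"multinode-315000", "multinode-315000-m02"},
}

// validateProfileName is a hypothetical stand-in for the MK_USAGE check above:
// the requested name must not collide with any profile or machine name.
func validateProfileName(name string) error {
	for profile, machines := range profiles {
		if name == profile {
			return fmt.Errorf("profile name %q already exists", name)
		}
		for _, m := range machines {
			if name == m {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(validateProfileName("multinode-315000-m02")) // rejected, like the exit status 14 above
	fmt.Println(validateProfileName("multinode-315000-m03")) // accepted; the test then starts this profile
}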

                                                
                                    
TestPreload (157.09s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-327000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0213 18:45:23.843844   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-327000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m16.509115088s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-327000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-327000 image pull gcr.io/k8s-minikube/busybox: (5.384625471s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-327000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-327000: (10.824766037s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-327000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-327000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (1m1.612745447s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-327000 image list
helpers_test.go:175: Cleaning up "test-preload-327000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-327000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-327000: (2.459200403s)
--- PASS: TestPreload (157.09s)

                                                
                                    
TestScheduledStopUnix (96s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-985000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-985000 --memory=2048 --driver=docker : (21.876124865s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-985000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-985000 -n scheduled-stop-985000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-985000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-985000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-985000 -n scheduled-stop-985000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-985000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-985000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-985000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-985000: exit status 7 (109.604709ms)

                                                
                                                
-- stdout --
	scheduled-stop-985000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-985000 -n scheduled-stop-985000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-985000 -n scheduled-stop-985000: exit status 7 (107.075008ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-985000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-985000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-985000: (2.144790579s)
--- PASS: TestScheduledStopUnix (96.00s)
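The scheduled-stop test above exercises a small CLI workflow: schedule a stop 5 minutes out, read the pending time via `status --format={{.TimeToStop}}`, replace the schedule with a 15s one, and cancel it with `--cancel-scheduled`. A minimal Go sketch of the same sequence follows, assuming the binary path and profile name from this run; every flag appears verbatim in the log above.

package main

import (
	"fmt"
	"os/exec"
)

// run wraps one minikube invocation and returns its combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Schedule, inspect, re-schedule, then cancel -- the sequence the test drives.
	steps := [][]string{
		{"stop", "-p", "scheduled-stop-985000", "--schedule", "5m"},
		{"status", "--format={{.TimeToStop}}", "-p", "scheduled-stop-985000"},
		{"stop", "-p", "scheduled-stop-985000", "--schedule", "15s"},
		{"stop", "-p", "scheduled-stop-985000", "--cancel-scheduled"},
	}
	for _, s := range steps {
		out, err := run(s...)
		fmt.Printf("minikube %v -> %q err=%v\n", s, out, err)
	}
}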

                                                
                                    
TestInsufficientStorage (10.73s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-966000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-966000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (7.697702873s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9ea36cbd-0b84-41e1-b229-9b773e4f5dcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-966000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f07aeff6-afba-450c-9051-5716f0be2dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18165"}}
	{"specversion":"1.0","id":"a0551e93-1266-4f59-ac39-a98cbc313479","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig"}}
	{"specversion":"1.0","id":"466f7f57-ed08-4f08-bf10-a5906ad21e9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"5dc0e2e9-4cc5-49eb-a560-134531690777","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"78a16009-5dd4-4431-a17a-c11e09622ef8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube"}}
	{"specversion":"1.0","id":"f2b910f9-27dd-40f0-a016-d3e23b37ca99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9da6e715-111c-4db1-a79d-43a83df6d495","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"aab933c5-1033-4b16-85b3-8d34a9c6bc09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fb002b14-ace2-447a-82fb-e96dfef1df66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7c936a0-bdc7-4b30-933c-c5c174c72ac2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"d074dd7e-d488-471a-80ae-f1635a852915","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-966000 in cluster insufficient-storage-966000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6487c354-eac7-410c-9890-92d49e83d6c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7938ac2b-0ff4-4e4c-bdb2-0fcd143a6eb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cd47903-fe9f-4674-a12d-294f0cf424d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-966000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-966000 --output=json --layout=cluster: exit status 7 (397.028232ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-966000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-966000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 18:54:28.142141   47396 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-966000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-966000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-966000 --output=json --layout=cluster: exit status 7 (398.923904ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-966000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-966000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 18:54:28.541772   47406 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-966000" does not appear in /Users/jenkins/minikube-integration/18165-38421/kubeconfig
	E0213 18:54:28.557989   47406 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/insufficient-storage-966000/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-966000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-966000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-966000: (2.231927201s)
--- PASS: TestInsufficientStorage (10.73s)
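With --output=json, each progress line in the stdout block above is a self-contained CloudEvents-style JSON object (specversion, id, source, type, and a data payload whose values are all strings). A minimal Go sketch that decodes such lines from stdin and prints the step messages, assuming input shaped like the log above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event models the fields visible in the JSON lines emitted by
// `minikube start --output=json` in the log above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these lines can be long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip non-JSON log lines
		}
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			continue
		}
		// e.g. "io.k8s.sigs.minikube.step: Creating docker container (CPUs=2, Memory=2048MB) ..."
		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
	}
}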

                                                
                                    
TestRunningBinaryUpgrade (87.42s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.1087890082 start -p running-upgrade-323000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:120: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.1087890082 start -p running-upgrade-323000 --memory=2200 --vm-driver=docker : (32.572539536s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-323000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-323000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (47.298147881s)
helpers_test.go:175: Cleaning up "running-upgrade-323000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-323000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-323000: (3.018680312s)
--- PASS: TestRunningBinaryUpgrade (87.42s)

                                                
                                    
TestMissingContainerUpgrade (202.67s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
E0213 18:54:40.354127   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.448159101 start -p missing-upgrade-807000 --memory=2200 --driver=docker 
version_upgrade_test.go:309: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.448159101 start -p missing-upgrade-807000 --memory=2200 --driver=docker : (2m10.205537372s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-807000
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-807000: (10.242690022s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-807000
version_upgrade_test.go:329: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-807000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:329: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-807000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (55.026332593s)
helpers_test.go:175: Cleaning up "missing-upgrade-807000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-807000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-807000: (2.49854467s)
--- PASS: TestMissingContainerUpgrade (202.67s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (4.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.54s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (73.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.2873302561 start -p stopped-upgrade-064000 --memory=2200 --vm-driver=docker 
E0213 18:58:26.988470   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.2873302561 start -p stopped-upgrade-064000 --memory=2200 --vm-driver=docker : (31.365049589s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.2873302561 -p stopped-upgrade-064000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.2873302561 -p stopped-upgrade-064000 stop: (12.274043276s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-064000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-064000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (30.332908769s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.97s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-064000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-064000: (2.946244657s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.95s)

                                                
                                    
TestPause/serial/Start (39.07s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-022000 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-022000 --memory=2048 --install-addons=false --wait=all --driver=docker : (39.069102198s)
--- PASS: TestPause/serial/Start (39.07s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-341000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-341000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (528.574697ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-341000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18165
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.53s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (24.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-341000 --driver=docker 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-341000 --driver=docker : (24.406338206s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-341000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.84s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-341000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-341000 --no-kubernetes --driver=docker : (6.059474021s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-341000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-341000 status -o json: exit status 2 (415.757085ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-341000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-341000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-341000: (2.431101055s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.91s)

                                                
                                    
TestNoKubernetes/serial/Start (7.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-341000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-341000 --no-kubernetes --driver=docker : (7.815652793s)
--- PASS: TestNoKubernetes/serial/Start (7.82s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.7s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-022000 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-022000 --alsologtostderr -v=1 --driver=docker : (40.684245016s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.70s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-341000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-341000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (388.312283ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (14.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (14.039595138s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (14.68s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-341000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-341000: (1.567193761s)
--- PASS: TestNoKubernetes/serial/Stop (1.57s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-341000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-341000 --driver=docker : (8.06405522s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.06s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-341000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-341000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (374.348147ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
TestPause/serial/Pause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-022000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

                                                
                                    
TestPause/serial/VerifyStatus (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-022000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-022000 --output=json --layout=cluster: exit status 2 (451.457174ms)

                                                
                                                
-- stdout --
	{"Name":"pause-022000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-022000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)
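The --layout=cluster output above reports HTTP-style status codes for the cluster, each node, and each component (200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage all appear in this report). A small Go sketch that decodes that shape, with field names taken from the JSON shown above; the types are an assumption for illustration, not minikube's exported API.

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
	Nodes      []node               `json:"Nodes"`
}

func main() {
	// Trimmed copy of the pause-022000 payload above.
	raw := `{"Name":"pause-022000","StatusCode":418,"StatusName":"Paused","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-022000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var s clusterStatus
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %d %s\n", s.Name, s.StatusCode, s.StatusName)
	for _, n := range s.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %d %s\n", n.Name, name, c.StatusCode, c.StatusName)
		}
	}
}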

                                                
                                    
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-022000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-022000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
TestPause/serial/DeletePaused (2.58s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-022000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-022000 --alsologtostderr -v=5: (2.583604258s)
--- PASS: TestPause/serial/DeletePaused (2.58s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-022000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-022000: exit status 1 (52.920312ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-022000: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (21.45s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=18165
- KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1400817529/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1400817529/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1400817529/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1400817529/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (21.45s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (24.64s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=18165
- KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3726991265/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3726991265/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3726991265/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3726991265/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (24.64s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (35.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (35.921721143s)
--- PASS: TestNetworkPlugins/group/auto/Start (35.92s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-210000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (14.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-210000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4xdfd" [9f9b22c6-9928-4fde-8beb-d299b2d22386] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0213 19:04:40.404342   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-4xdfd" [9f9b22c6-9928-4fde-8beb-d299b2d22386] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.006526074s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.20s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-210000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
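
The auto group's connectivity checks above (NetCatPod, DNS, Localhost, HairPin) can be replayed by hand against the same profile using the commands the tests log. A rough sketch, assuming the netcat deployment created by the NetCatPod step still exists in the default namespace:

    # Wait for the netcat pod, then repeat the three checks from the log above.
    kubectl --context auto-210000 wait --for=condition=Ready pod -l app=netcat --timeout=15m
    kubectl --context auto-210000 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"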

                                                
                                    
TestNetworkPlugins/group/calico/Start (65.89s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
E0213 19:05:23.884872   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m5.888484114s)
--- PASS: TestNetworkPlugins/group/calico/Start (65.89s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9hn8b" [041c4e31-baac-4ade-8a81-3770ed003006] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005182614s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-210000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (15.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-210000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2gfw6" [d5d8e1a4-0740-4ff4-873d-8a7ae154cd0e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2gfw6" [d5d8e1a4-0740-4ff4-873d-8a7ae154cd0e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.005781835s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.21s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-210000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.93s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (54.933962185s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.93s)

                                                
                                    
TestNetworkPlugins/group/false/Start (40.21s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (40.214057661s)
--- PASS: TestNetworkPlugins/group/false/Start (40.21s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-210000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (14.19s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-210000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gd6t8" [b982ba51-3e74-493c-bc28-819e498363fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gd6t8" [b982ba51-3e74-493c-bc28-819e498363fa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.004905419s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-210000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-210000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mnzlb" [9390d70a-557c-4d27-a8a7-dc82eec87434] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mnzlb" [9390d70a-557c-4d27-a8a7-dc82eec87434] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.003741875s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.17s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-210000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-210000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (52.51s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (52.505967156s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.51s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (53.37s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (53.367222074s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-62bfx" [d92afa9d-257d-4d2c-aaf2-55db9d9da08f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00434552s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
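
The ControllerPod step waits for a pod matching app=kindnet in kube-system before the rest of the group runs. A rough manual equivalent of that wait, with the label selector, namespace, and timeout taken from the log above (the test itself uses its own Go helper rather than kubectl wait):

    kubectl --context kindnet-210000 -n kube-system get pods -l app=kindnet
    kubectl --context kindnet-210000 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m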

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zzlj5" [22965711-49ad-4e76-9dea-a51e367542f9] Running
E0213 19:09:23.450126   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004945031s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-210000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-210000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n6zxf" [f61612d9-b618-48cf-994f-c9bb8eb4da5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n6zxf" [f61612d9-b618-48cf-994f-c9bb8eb4da5f] Running
E0213 19:09:34.012446   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:09:34.017653   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:09:34.028104   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:09:34.048561   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:09:34.088801   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:09:34.168921   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:09:34.329133   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:09:34.649275   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:09:35.289476   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:09:36.570374   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.005165926s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-210000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (14.2s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-210000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4pwjb" [36e3675e-503c-4eb0-94a9-3a2090a165a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4pwjb" [36e3675e-503c-4eb0-94a9-3a2090a165a3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.006136652s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-210000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-210000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (78.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (1m18.484938892s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.49s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (37.59s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0213 19:10:14.972642   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:10:23.881913   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (37.585225449s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.59s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-210000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-210000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-48fb5" [c520e543-d432-415a-abab-589caf48a48d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-48fb5" [c520e543-d432-415a-abab-589caf48a48d] Running
E0213 19:10:55.933185   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.005976027s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-210000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-210000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-210000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n4qgl" [8d19ab3b-d14c-4dbc-b084-43b50a65e99f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n4qgl" [8d19ab3b-d14c-4dbc-b084-43b50a65e99f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.005432927s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (38.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0213 19:11:27.917707   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/calico-210000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-210000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (38.203757368s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (38.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-210000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-210000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (14.33s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-210000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-87rs6" [5c8b77bb-8e74-44db-a791-7f1bb7a8514a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-87rs6" [5c8b77bb-8e74-44db-a791-7f1bb7a8514a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.111447635s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.33s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-210000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-210000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0213 19:12:17.853202   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (101.06s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-867000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0213 19:12:46.663532   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:46.668643   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:46.679094   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:46.699740   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:46.741048   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:46.821957   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:46.984044   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:47.304823   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:47.945076   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:49.225716   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:49.230604   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:12:49.236929   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:12:49.246996   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:12:49.267220   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:12:49.307448   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:12:49.389012   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:12:49.550384   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:12:49.871365   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:12:50.511677   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:12:51.786082   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:51.792067   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:12:54.352490   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:12:56.906962   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:12:59.472701   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:13:07.147766   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:13:09.712799   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:13:27.627825   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:13:30.192875   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:14:01.519246   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/calico-210000/client.crt: no such file or directory
E0213 19:14:08.587808   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:14:11.153298   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:14:18.056820   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:14:18.062040   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:14:18.072213   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:14:18.092348   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:14:18.134499   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:14:18.214636   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:14:18.375097   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:14:18.695330   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:14:19.336362   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:14:20.617305   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-867000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (1m41.060020909s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (101.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (13.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-867000 create -f testdata/busybox.yaml
E0213 19:14:22.311586   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:14:22.317112   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:14:22.327560   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E0213 19:14:22.347803   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2d36cac3-f433-414c-9931-44321edab439] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0213 19:14:22.389259   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:14:22.469869   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:14:22.630043   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:14:22.951153   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:14:23.177469   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:14:23.591940   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:14:24.874255   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:14:27.434431   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:14:28.298286   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2d36cac3-f433-414c-9931-44321edab439] Running
E0213 19:14:32.555322   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:14:34.011221   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.003079923s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-867000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-867000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-867000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.079885003s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-867000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.9s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-867000 --alsologtostderr -v=3
E0213 19:14:38.540385   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:14:40.400597   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 19:14:42.795499   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-867000 --alsologtostderr -v=3: (10.900709499s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-867000 -n no-preload-867000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-867000 -n no-preload-867000: exit status 7 (108.606167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-867000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (335.48s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-867000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0213 19:14:59.020714   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:15:01.692452   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:15:03.276163   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:15:06.982011   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 19:15:23.879526   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 19:15:30.508296   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:15:33.073232   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:15:39.980892   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:15:44.236700   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:15:47.759870   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:15:47.765039   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:15:47.775824   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:15:47.796602   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:15:47.836774   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:15:47.916977   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:15:48.077202   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:15:48.397738   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:15:49.038462   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:15:50.319872   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:15:52.880653   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:15:58.000849   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:16:08.241322   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-867000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (5m35.033105814s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-867000 -n no-preload-867000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.48s)

TestStartStop/group/old-k8s-version/serial/Stop (1.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-187000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-187000 --alsologtostderr -v=3: (1.549916507s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (109.640453ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-187000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4fmh9" [6fa42a25-11be-4077-86f7-79bd55a69717] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0213 19:20:23.923858   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4fmh9" [6fa42a25-11be-4077-86f7-79bd55a69717] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.0038883s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4fmh9" [6fa42a25-11be-4077-86f7-79bd55a69717] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003927218s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-867000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-867000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/no-preload/serial/Pause (3.57s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-867000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-867000 -n no-preload-867000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-867000 -n no-preload-867000: exit status 2 (431.954876ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-867000 -n no-preload-867000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-867000 -n no-preload-867000: exit status 2 (424.126796ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-867000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-867000 -n no-preload-867000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-867000 -n no-preload-867000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.57s)
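The Pause subtest drives a pause/unpause cycle through the same binary; a hand-runnable sketch of the sequence logged above (expected outputs are taken from the stdout blocks; `|| true` approximates the harness accepting exit status 2 while paused):

    # Pause the cluster, then check component state; both status calls exit 2 while paused.
    out/minikube-darwin-amd64 pause -p no-preload-867000 --alsologtostderr -v=1
    out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p no-preload-867000 -n no-preload-867000 || true   # prints: Paused
    out/minikube-darwin-amd64 status --format='{{.Kubelet}}' -p no-preload-867000 -n no-preload-867000 || true     # prints: Stopped
    # Unpause and re-check; the same status calls should now succeed.
    out/minikube-darwin-amd64 unpause -p no-preload-867000 --alsologtostderr -v=1
    out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p no-preload-867000 -n no-preload-867000
    out/minikube-darwin-amd64 status --format='{{.Kubelet}}' -p no-preload-867000 -n no-preload-867000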

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (76.36s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-815000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0213 19:21:15.488993   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:21:17.719757   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/calico-210000/client.crt: no such file or directory
E0213 19:21:21.701658   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:21:49.385664   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
E0213 19:22:03.313705   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-815000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (1m16.362614987s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.36s)

TestStartStop/group/embed-certs/serial/DeployApp (13.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-815000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ba600847-72e2-47ba-be4e-cd8c69f4e9d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ba600847-72e2-47ba-be4e-cd8c69f4e9d4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.004220608s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-815000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.26s)
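The DeployApp step above can be approximated by hand; a sketch that substitutes kubectl wait for the harness's own label polling (the integration-test=busybox label, the default namespace, and the 8m budget come from the log):

    # Deploy the busybox test pod and wait for it to become Ready.
    kubectl --context embed-certs-815000 create -f testdata/busybox.yaml
    kubectl --context embed-certs-815000 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    # The test then checks the container's open-file limit inside the pod.
    kubectl --context embed-certs-815000 exec busybox -- /bin/sh -c "ulimit -n"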

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-815000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-815000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.16976461s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-815000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/embed-certs/serial/Stop (11.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-815000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-815000 --alsologtostderr -v=3: (11.079215777s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.08s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.45s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-815000 -n embed-certs-815000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-815000 -n embed-certs-815000: exit status 7 (120.22038ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-815000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0213 19:22:31.001578   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kubenet-210000/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.45s)

TestStartStop/group/embed-certs/serial/SecondStart (330.96s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-815000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0213 19:22:46.705762   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/false-210000/client.crt: no such file or directory
E0213 19:22:49.274851   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/custom-flannel-210000/client.crt: no such file or directory
E0213 19:24:18.098605   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/kindnet-210000/client.crt: no such file or directory
E0213 19:24:22.354306   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:24:22.383588   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:22.389217   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:22.399491   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:22.420461   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:22.461562   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:22.542239   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:22.702516   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:23.023230   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:23.663493   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:24.943917   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:27.504389   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:32.624607   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:24:34.053388   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:24:40.443311   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 19:24:42.865246   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:25:03.345894   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:25:23.922596   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 19:25:44.306054   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
E0213 19:25:47.803324   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
E0213 19:25:57.096263   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
E0213 19:26:03.490887   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 19:26:17.718992   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/calico-210000/client.crt: no such file or directory
E0213 19:26:21.700973   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/enable-default-cni-210000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-815000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (5m30.515326733s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-815000 -n embed-certs-815000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (330.96s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8bwt6" [cf157539-6c46-45f7-9ed3-0d7584d81532] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8bwt6" [cf157539-6c46-45f7-9ed3-0d7584d81532] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.004375369s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8bwt6" [cf157539-6c46-45f7-9ed3-0d7584d81532] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00485566s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-815000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-815000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/embed-certs/serial/Pause (3.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-815000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-815000 -n embed-certs-815000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-815000 -n embed-certs-815000: exit status 2 (428.390629ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-815000 -n embed-certs-815000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-815000 -n embed-certs-815000: exit status 2 (429.09639ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-815000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-815000 -n embed-certs-815000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-815000 -n embed-certs-815000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.39s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-069000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-069000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (45.533433329s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.53s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-069000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [736ad504-aef5-41a7-8881-b122966055cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0213 19:29:22.353668   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/flannel-210000/client.crt: no such file or directory
E0213 19:29:22.382717   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/no-preload-867000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [736ad504-aef5-41a7-8881-b122966055cc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 13.006811164s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-069000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-069000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-069000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.207898085s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-069000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-069000 --alsologtostderr -v=3
E0213 19:29:34.054401   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/auto-210000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-069000 --alsologtostderr -v=3: (10.922509588s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-069000 -n default-k8s-diff-port-069000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-069000 -n default-k8s-diff-port-069000: exit status 7 (112.054615ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-069000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.45s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-069000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-069000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (5m37.889754656s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-069000 -n default-k8s-diff-port-069000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.34s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qwq92" [bbb2b826-caad-4acb-9b17-2bf76e09a2df] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0213 19:35:23.994222   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qwq92" [bbb2b826-caad-4acb-9b17-2bf76e09a2df] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.006164646s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qwq92" [bbb2b826-caad-4acb-9b17-2bf76e09a2df] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005965615s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-069000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-069000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-069000 --alsologtostderr -v=1
E0213 19:35:47.875221   38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/bridge-210000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-069000 -n default-k8s-diff-port-069000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-069000 -n default-k8s-diff-port-069000: exit status 2 (434.777974ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-069000 -n default-k8s-diff-port-069000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-069000 -n default-k8s-diff-port-069000: exit status 2 (427.996645ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-069000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-069000 -n default-k8s-diff-port-069000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-069000 -n default-k8s-diff-port-069000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.43s)

TestStartStop/group/newest-cni/serial/FirstStart (35.39s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-886000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-886000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (35.394416896s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.39s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-886000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-886000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.067585381s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/newest-cni/serial/Stop (10.91s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-886000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-886000 --alsologtostderr -v=3: (10.911370528s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.91s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-886000 -n newest-cni-886000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-886000 -n newest-cni-886000: exit status 7 (109.628335ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-886000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/newest-cni/serial/SecondStart (30.11s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-886000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-886000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (29.64761762s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-886000 -n newest-cni-886000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.11s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-886000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/Pause (3.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-886000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-886000 -n newest-cni-886000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-886000 -n newest-cni-886000: exit status 2 (447.947276ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-886000 -n newest-cni-886000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-886000 -n newest-cni-886000: exit status 2 (435.042717ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-886000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-886000 -n newest-cni-886000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-886000 -n newest-cni-886000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.46s)

Test skip (21/333)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (18.62s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 14.043098ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-b9lvd" [e3ea9b09-99e0-486d-885a-5998ef861bab] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004491294s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jhdnv" [0a75be7d-3b73-4a9c-a86d-f2cdf0fa617d] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.089035276s
addons_test.go:340: (dbg) Run:  kubectl --context addons-444000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-444000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-444000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.442399948s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (18.62s)
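The connectivity probe that precedes the Registry skip can be reproduced in-cluster; a sketch using the same image and service DNS name shown in the log:

    # From inside the cluster, confirm the registry service answers over HTTP.
    kubectl --context addons-444000 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # The remaining steps are skipped here due to the connectivity assumptions noted above.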

                                                
                                    
TestAddons/parallel/Ingress (10.85s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-444000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-444000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-444000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [315a21c2-8605-47e3-93cb-68314eb333c3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [315a21c2-8605-47e3-93cb-68314eb333c3] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004919852s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-444000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.85s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (6.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-525000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-525000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-wk5qg" [d8d563f9-86b8-47dd-8267-e4e10981875d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-wk5qg" [d8d563f9-86b8-47dd-8267-e4e10981875d] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.006834904s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (6.14s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (7.04s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-210000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-210000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-210000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-210000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-210000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-210000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-210000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-210000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-210000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-210000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-210000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: /etc/hosts:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: /etc/resolv.conf:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-210000

>>> host: crictl pods:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: crictl containers:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> k8s: describe netcat deployment:
error: context "cilium-210000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-210000" does not exist

>>> k8s: netcat logs:
error: context "cilium-210000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-210000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-210000" does not exist

>>> k8s: coredns logs:
error: context "cilium-210000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-210000" does not exist

>>> k8s: api server logs:
error: context "cilium-210000" does not exist

>>> host: /etc/cni:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: ip a s:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: ip r s:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: iptables-save:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: iptables table nat:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-210000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-210000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-210000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-210000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-210000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-210000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-210000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-210000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-210000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-210000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-210000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: kubelet daemon config:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> k8s: kubelet logs:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-210000

>>> host: docker daemon status:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: docker daemon config:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: docker system info:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: cri-docker daemon status:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: cri-docker daemon config:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: cri-dockerd version:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: containerd daemon status:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: containerd daemon config:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: containerd config dump:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: crio daemon status:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: crio daemon config:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: /etc/crio:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"

>>> host: crio config:
* Profile "cilium-210000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210000"
----------------------- debugLogs end: cilium-210000 [took: 6.564940462s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-210000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-210000
--- SKIP: TestNetworkPlugins/group/cilium (7.04s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.41s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-377000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-377000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.41s)