Test Report: Docker_macOS 17348

                    
45bf4980d68735837852807807c59e04345b65bd:2023-10-03:31286

Test fail (5/153)

TestIngressAddonLegacy/StartLegacyK8sCluster (262.66s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-022000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1003 18:16:53.284484   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:19:09.439540   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:19:23.949591   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:23.955601   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:23.967197   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:23.988387   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:24.029468   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:24.109703   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:24.271259   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:24.593489   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:25.234579   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:26.515649   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:29.076656   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:34.198463   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:37.126118   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:19:44.439043   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:20:04.919496   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:20:45.880386   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-022000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m22.614755911s)
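
To reproduce locally, the failing invocation below is copied verbatim from the log above; it assumes a minikube binary built from this commit at out/minikube-darwin-amd64, as used by this job:

	out/minikube-darwin-amd64 start -p ingress-addon-legacy-022000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker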

-- stdout --
	* [ingress-addon-legacy-022000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-022000 in cluster ingress-addon-legacy-022000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1003 18:16:49.640576   25198 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:16:49.640854   25198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:16:49.640859   25198 out.go:309] Setting ErrFile to fd 2...
	I1003 18:16:49.640863   25198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:16:49.641035   25198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
	I1003 18:16:49.642501   25198 out.go:303] Setting JSON to false
	I1003 18:16:49.664122   25198 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6378,"bootTime":1696375831,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 18:16:49.664215   25198 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 18:16:49.685843   25198 out.go:177] * [ingress-addon-legacy-022000] minikube v1.31.2 on Darwin 14.0
	I1003 18:16:49.728685   25198 out.go:177]   - MINIKUBE_LOCATION=17348
	I1003 18:16:49.728772   25198 notify.go:220] Checking for updates...
	I1003 18:16:49.772501   25198 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
	I1003 18:16:49.793707   25198 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 18:16:49.815776   25198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:16:49.836593   25198 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube
	I1003 18:16:49.857650   25198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:16:49.879146   25198 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 18:16:49.936345   25198 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 18:16:49.936493   25198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:16:50.036679   25198 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-04 01:16:50.02565516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:16:50.058543   25198 out.go:177] * Using the docker driver based on user configuration
	I1003 18:16:50.079777   25198 start.go:298] selected driver: docker
	I1003 18:16:50.079806   25198 start.go:902] validating driver "docker" against <nil>
	I1003 18:16:50.079820   25198 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:16:50.084143   25198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:16:50.183711   25198 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-04 01:16:50.172652504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:16:50.183887   25198 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 18:16:50.184071   25198 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:16:50.205641   25198 out.go:177] * Using Docker Desktop driver with root privileges
	I1003 18:16:50.227183   25198 cni.go:84] Creating CNI manager for ""
	I1003 18:16:50.227219   25198 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 18:16:50.227231   25198 start_flags.go:321] config:
	{Name:ingress-addon-legacy-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:16:50.249419   25198 out.go:177] * Starting control plane node ingress-addon-legacy-022000 in cluster ingress-addon-legacy-022000
	I1003 18:16:50.292087   25198 cache.go:122] Beginning downloading kic base image for docker with docker
	I1003 18:16:50.313359   25198 out.go:177] * Pulling base image ...
	I1003 18:16:50.356170   25198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1003 18:16:50.356200   25198 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 in local docker daemon
	I1003 18:16:50.407315   25198 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 in local docker daemon, skipping pull
	I1003 18:16:50.407337   25198 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 exists in daemon, skipping load
	I1003 18:16:50.407912   25198 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1003 18:16:50.407922   25198 cache.go:57] Caching tarball of preloaded images
	I1003 18:16:50.408104   25198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1003 18:16:50.429359   25198 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1003 18:16:50.471341   25198 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:16:50.557790   25198 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1003 18:16:55.729731   25198 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:16:55.729910   25198 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:16:56.352751   25198 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1003 18:16:56.352989   25198 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/config.json ...
	I1003 18:16:56.353013   25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/config.json: {Name:mkacce9391d23aa34ae0a7fb95ec37646fa4ab22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:16:56.353349   25198 cache.go:195] Successfully downloaded all kic artifacts
	I1003 18:16:56.353375   25198 start.go:365] acquiring machines lock for ingress-addon-legacy-022000: {Name:mkdafd119bcc1cfcbf80d8d66936b93f4444fb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:16:56.353499   25198 start.go:369] acquired machines lock for "ingress-addon-legacy-022000" in 115.682µs
	I1003 18:16:56.353519   25198 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 18:16:56.353568   25198 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:16:56.389030   25198 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1003 18:16:56.389370   25198 start.go:159] libmachine.API.Create for "ingress-addon-legacy-022000" (driver="docker")
	I1003 18:16:56.389435   25198 client.go:168] LocalClient.Create starting
	I1003 18:16:56.389597   25198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem
	I1003 18:16:56.389674   25198 main.go:141] libmachine: Decoding PEM data...
	I1003 18:16:56.389705   25198 main.go:141] libmachine: Parsing certificate...
	I1003 18:16:56.389802   25198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/cert.pem
	I1003 18:16:56.389866   25198 main.go:141] libmachine: Decoding PEM data...
	I1003 18:16:56.389892   25198 main.go:141] libmachine: Parsing certificate...
	I1003 18:16:56.390786   25198 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-022000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:16:56.444838   25198 cli_runner.go:211] docker network inspect ingress-addon-legacy-022000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:16:56.444945   25198 network_create.go:281] running [docker network inspect ingress-addon-legacy-022000] to gather additional debugging logs...
	I1003 18:16:56.444962   25198 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-022000
	W1003 18:16:56.496029   25198 cli_runner.go:211] docker network inspect ingress-addon-legacy-022000 returned with exit code 1
	I1003 18:16:56.496069   25198 network_create.go:284] error running [docker network inspect ingress-addon-legacy-022000]: docker network inspect ingress-addon-legacy-022000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-022000 not found
	I1003 18:16:56.496085   25198 network_create.go:286] output of [docker network inspect ingress-addon-legacy-022000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-022000 not found
	
	** /stderr **
	I1003 18:16:56.496252   25198 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:16:56.546975   25198 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0006e4330}
	I1003 18:16:56.547010   25198 network_create.go:124] attempt to create docker network ingress-addon-legacy-022000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I1003 18:16:56.547083   25198 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-022000 ingress-addon-legacy-022000
	I1003 18:16:56.633811   25198 network_create.go:108] docker network ingress-addon-legacy-022000 192.168.49.0/24 created
	I1003 18:16:56.633863   25198 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-022000" container
	I1003 18:16:56.633981   25198 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:16:56.684937   25198 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-022000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-022000 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:16:56.736818   25198 oci.go:103] Successfully created a docker volume ingress-addon-legacy-022000
	I1003 18:16:56.736962   25198 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-022000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-022000 --entrypoint /usr/bin/test -v ingress-addon-legacy-022000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 -d /var/lib
	I1003 18:16:57.155206   25198 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-022000
	I1003 18:16:57.155255   25198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1003 18:16:57.155269   25198 kic.go:190] Starting extracting preloaded images to volume ...
	I1003 18:16:57.155374   25198 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-022000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:17:00.087874   25198 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-022000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 -I lz4 -xf /preloaded.tar -C /extractDir: (2.932433397s)
	I1003 18:17:00.087895   25198 kic.go:199] duration metric: took 2.932617 seconds to extract preloaded images to volume
	I1003 18:17:00.088005   25198 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:17:00.187276   25198 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-022000 --name ingress-addon-legacy-022000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-022000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-022000 --network ingress-addon-legacy-022000 --ip 192.168.49.2 --volume ingress-addon-legacy-022000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880
	I1003 18:17:00.481927   25198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Running}}
	I1003 18:17:00.536778   25198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Status}}
	I1003 18:17:00.592055   25198 cli_runner.go:164] Run: docker exec ingress-addon-legacy-022000 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:17:00.711591   25198 oci.go:144] the created container "ingress-addon-legacy-022000" has a running status.
	I1003 18:17:00.711637   25198 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa...
	I1003 18:17:01.179067   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 18:17:01.179120   25198 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:17:01.238055   25198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Status}}
	I1003 18:17:01.289355   25198 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:17:01.289374   25198 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-022000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:17:01.382499   25198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Status}}
	I1003 18:17:01.433890   25198 machine.go:88] provisioning docker machine ...
	I1003 18:17:01.433932   25198 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-022000"
	I1003 18:17:01.434042   25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:17:01.484944   25198 main.go:141] libmachine: Using SSH client type: native
	I1003 18:17:01.485281   25198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 56379 <nil> <nil>}
	I1003 18:17:01.485298   25198 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-022000 && echo "ingress-addon-legacy-022000" | sudo tee /etc/hostname
	I1003 18:17:01.626273   25198 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-022000
	
	I1003 18:17:01.626370   25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:17:01.678077   25198 main.go:141] libmachine: Using SSH client type: native
	I1003 18:17:01.678382   25198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 56379 <nil> <nil>}
	I1003 18:17:01.678396   25198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-022000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-022000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-022000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:17:01.804619   25198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:17:01.804652   25198 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17348-21848/.minikube CaCertPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17348-21848/.minikube}
	I1003 18:17:01.804670   25198 ubuntu.go:177] setting up certificates
	I1003 18:17:01.804679   25198 provision.go:83] configureAuth start
	I1003 18:17:01.804783   25198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-022000
	I1003 18:17:01.855832   25198 provision.go:138] copyHostCerts
	I1003 18:17:01.855870   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17348-21848/.minikube/cert.pem
	I1003 18:17:01.855918   25198 exec_runner.go:144] found /Users/jenkins/minikube-integration/17348-21848/.minikube/cert.pem, removing ...
	I1003 18:17:01.855928   25198 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17348-21848/.minikube/cert.pem
	I1003 18:17:01.856038   25198 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17348-21848/.minikube/cert.pem (1123 bytes)
	I1003 18:17:01.856224   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17348-21848/.minikube/key.pem
	I1003 18:17:01.856264   25198 exec_runner.go:144] found /Users/jenkins/minikube-integration/17348-21848/.minikube/key.pem, removing ...
	I1003 18:17:01.856268   25198 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17348-21848/.minikube/key.pem
	I1003 18:17:01.856366   25198 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17348-21848/.minikube/key.pem (1675 bytes)
	I1003 18:17:01.856515   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.pem
	I1003 18:17:01.856542   25198 exec_runner.go:144] found /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.pem, removing ...
	I1003 18:17:01.856547   25198 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.pem
	I1003 18:17:01.856613   25198 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.pem (1078 bytes)
	I1003 18:17:01.856748   25198 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-022000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-022000]
	I1003 18:17:02.112089   25198 provision.go:172] copyRemoteCerts
	I1003 18:17:02.112153   25198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:17:02.112212   25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:17:02.163237   25198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
	I1003 18:17:02.256750   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 18:17:02.256827   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1003 18:17:02.279685   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 18:17:02.279778   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1003 18:17:02.302579   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 18:17:02.302657   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:17:02.325699   25198 provision.go:86] duration metric: configureAuth took 521.003121ms
	I1003 18:17:02.325714   25198 ubuntu.go:193] setting minikube options for container-runtime
	I1003 18:17:02.325856   25198 config.go:182] Loaded profile config "ingress-addon-legacy-022000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:17:02.325922   25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:17:02.377420   25198 main.go:141] libmachine: Using SSH client type: native
	I1003 18:17:02.377736   25198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 56379 <nil> <nil>}
	I1003 18:17:02.377756   25198 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 18:17:02.504908   25198 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1003 18:17:02.504924   25198 ubuntu.go:71] root file system type: overlay
	I1003 18:17:02.505032   25198 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 18:17:02.505143   25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:17:02.556500   25198 main.go:141] libmachine: Using SSH client type: native
	I1003 18:17:02.556804   25198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 56379 <nil> <nil>}
	I1003 18:17:02.556870   25198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 18:17:02.694333   25198 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 18:17:02.694427   25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:17:02.746526   25198 main.go:141] libmachine: Using SSH client type: native
	I1003 18:17:02.746825   25198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 56379 <nil> <nil>}
	I1003 18:17:02.746838   25198 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 18:17:03.388283   25198 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-04 01:17:02.692023259 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1003 18:17:03.388310   25198 machine.go:91] provisioned docker machine in 1.954392749s
	I1003 18:17:03.388318   25198 client.go:171] LocalClient.Create took 6.998855255s
	I1003 18:17:03.388338   25198 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-022000" took 6.998951411s
	I1003 18:17:03.388348   25198 start.go:300] post-start starting for "ingress-addon-legacy-022000" (driver="docker")
	I1003 18:17:03.388360   25198 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:17:03.388431   25198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:17:03.388523   25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:17:03.441765   25198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
	I1003 18:17:03.538202   25198 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:17:03.542510   25198 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:17:03.542534   25198 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1003 18:17:03.542542   25198 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1003 18:17:03.542550   25198 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1003 18:17:03.542560   25198 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17348-21848/.minikube/addons for local assets ...
	I1003 18:17:03.542677   25198 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17348-21848/.minikube/files for local assets ...
	I1003 18:17:03.542849   25198 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem -> 223182.pem in /etc/ssl/certs
	I1003 18:17:03.542855   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem -> /etc/ssl/certs/223182.pem
	I1003 18:17:03.543038   25198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:17:03.552229   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem --> /etc/ssl/certs/223182.pem (1708 bytes)
	I1003 18:17:03.575082   25198 start.go:303] post-start completed in 186.723971ms
	I1003 18:17:03.575637   25198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-022000
	I1003 18:17:03.627434   25198 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/config.json ...
	I1003 18:17:03.627904   25198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:17:03.627961   25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:17:03.679095   25198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
	I1003 18:17:03.768348   25198 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:17:03.773784   25198 start.go:128] duration metric: createHost completed in 7.420174026s
	I1003 18:17:03.773804   25198 start.go:83] releasing machines lock for "ingress-addon-legacy-022000", held for 7.420277748s
	I1003 18:17:03.773880   25198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-022000
	I1003 18:17:03.825420   25198 ssh_runner.go:195] Run: cat /version.json
	I1003 18:17:03.825443   25198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:17:03.825509   25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:17:03.825519   25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:17:03.878755   25198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
	I1003 18:17:03.878759   25198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
	I1003 18:17:04.069520   25198 ssh_runner.go:195] Run: systemctl --version
	I1003 18:17:04.075074   25198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1003 18:17:04.080502   25198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1003 18:17:04.105738   25198 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1003 18:17:04.105817   25198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1003 18:17:04.123251   25198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1003 18:17:04.140326   25198 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
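The three find/sed passes normalize whatever CNI configs ship in the kicbase image: the loopback config gains a "name" field and cniVersion 1.0.0, IPv6 dst/subnet entries are dropped, and every bridge/podman config has its subnet forced to the pod CIDR 10.244.0.0/16. Isolating the subnet rewrite on one of the files the log reports as patched (the file's prior contents are an assumption; only the substitution is from the log):

    # Force the bridge CNI subnet to the pod CIDR, e.g. turning
    #   "subnet": "10.88.0.0/16"   into   "subnet": "10.244.0.0/16"
    sudo sed -i -r \
      's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' \
      /etc/cni/net.d/100-crio-bridge.conf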
	I1003 18:17:04.140340   25198 start.go:469] detecting cgroup driver to use...
	I1003 18:17:04.140353   25198 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1003 18:17:04.140509   25198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:17:04.156840   25198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1003 18:17:04.167414   25198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 18:17:04.177797   25198 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 18:17:04.177858   25198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 18:17:04.188315   25198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 18:17:04.198902   25198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 18:17:04.209581   25198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 18:17:04.220102   25198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:17:04.229998   25198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 18:17:04.240729   25198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:17:04.249874   25198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:17:04.258905   25198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:17:04.317201   25198 ssh_runner.go:195] Run: sudo systemctl restart containerd
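Taken together, the edits above pin containerd to the "cgroupfs" driver detected on the host, switch every runtime to io.containerd.runc.v2, and point its CNI conf_dir at /etc/cni/net.d before restarting it. The log only shows the sed commands; the fragments they leave behind would read roughly:

    # /etc/crictl.yaml (written verbatim by the tee above)
    runtime-endpoint: unix:///run/containerd/containerd.sock

    # /etc/containerd/config.toml after the sed edits (reconstruction, not captured in the log)
    sandbox_image = "registry.k8s.io/pause:3.2"
    restrict_oom_score_adj = false
    SystemdCgroup = false
    conf_dir = "/etc/cni/net.d"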
	I1003 18:17:04.397916   25198 start.go:469] detecting cgroup driver to use...
	I1003 18:17:04.397935   25198 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1003 18:17:04.398013   25198 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 18:17:04.410188   25198 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1003 18:17:04.410253   25198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 18:17:04.422474   25198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:17:04.440434   25198 ssh_runner.go:195] Run: which cri-dockerd
	I1003 18:17:04.445362   25198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 18:17:04.456330   25198 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1003 18:17:04.501480   25198 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 18:17:04.596125   25198 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 18:17:04.656388   25198 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 18:17:04.679730   25198 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 18:17:04.701351   25198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:17:04.763557   25198 ssh_runner.go:195] Run: sudo systemctl restart docker
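docker.go:554 writes a small /etc/docker/daemon.json (130 bytes here) forcing Docker onto the same cgroupfs driver before the restart. The exact payload is not logged; a daemon.json with this effect would look like:

    # Illustrative only: the actual 130-byte file is not shown in the log.
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    { "exec-opts": ["native.cgroupdriver=cgroupfs"] }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker
    docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs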
	I1003 18:17:05.030253   25198 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 18:17:05.055842   25198 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 18:17:05.103486   25198 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I1003 18:17:05.103634   25198 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-022000 dig +short host.docker.internal
	I1003 18:17:05.226091   25198 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1003 18:17:05.226184   25198 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1003 18:17:05.231326   25198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
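Note the rewrite-and-copy idiom in that command: inside a container /etc/hosts is a bind mount, so sed -i (which renames a temp file over the target) fails with "Device or resource busy"; filtering into /tmp and cp-ing back writes through the existing inode instead. The same pattern, isolated:

    # Safe edit of a bind-mounted file: rewrite a copy, then cp over the original inode.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$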
	I1003 18:17:05.243309   25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:17:05.295581   25198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1003 18:17:05.295665   25198 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 18:17:05.316480   25198 docker.go:664] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1003 18:17:05.316507   25198 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1003 18:17:05.316569   25198 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 18:17:05.326203   25198 ssh_runner.go:195] Run: which lz4
	I1003 18:17:05.330780   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1003 18:17:05.330916   25198 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 18:17:05.335291   25198 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 18:17:05.335314   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I1003 18:17:10.778726   25198 docker.go:628] Took 5.447849 seconds to copy over tarball
	I1003 18:17:10.778806   25198 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 18:17:12.773852   25198 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.995022473s)
	I1003 18:17:12.773867   25198 ssh_runner.go:146] rm: /preloaded.tar.lz4
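That is the whole preload cycle: a 424164442-byte (~405 MiB) lz4 tarball of pre-pulled images is copied into the node and unpacked over /var, seeding /var/lib/docker so nothing has to be pulled from a registry. A hand-runnable equivalent over the SSH endpoint discovered earlier (a sketch; minikube streams the file over its own SSH session rather than shelling out to scp):

    KEY="$HOME/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa"
    TAR="$HOME/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4"
    scp -P 56379 -i "$KEY" "$TAR" docker@127.0.0.1:/tmp/preloaded.tar.lz4
    ssh -p 56379 -i "$KEY" docker@127.0.0.1 \
      'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'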
	I1003 18:17:12.827513   25198 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 18:17:12.837135   25198 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1003 18:17:12.854050   25198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:17:12.907226   25198 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 18:17:14.017184   25198 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.109928566s)
	I1003 18:17:14.017285   25198 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 18:17:14.037452   25198 docker.go:664] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1003 18:17:14.037468   25198 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1003 18:17:14.037482   25198 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
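The mismatch driving this branch: the preload tarball tags everything under the legacy k8s.gcr.io registry (see the image list above), while this minikube expects registry.k8s.io names, so docker.go:670 decides kube-apiserver "wasn't preloaded" and LoadImages falls back to the on-disk image cache. Since both registries serve the same images for these tags, retagging on the node's Docker would equally satisfy the check (illustrative, not something minikube does here):

    # Alias the preloaded k8s.gcr.io images under registry.k8s.io names (sketch).
    for img in kube-proxy kube-apiserver kube-scheduler kube-controller-manager; do
      docker tag "k8s.gcr.io/$img:v1.18.20" "registry.k8s.io/$img:v1.18.20"
    done
    docker tag k8s.gcr.io/pause:3.2     registry.k8s.io/pause:3.2
    docker tag k8s.gcr.io/coredns:1.6.7 registry.k8s.io/coredns:1.6.7
    docker tag k8s.gcr.io/etcd:3.4.3-0  registry.k8s.io/etcd:3.4.3-0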
	I1003 18:17:14.043566   25198 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1003 18:17:14.043615   25198 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:17:14.043747   25198 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 18:17:14.043812   25198 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1003 18:17:14.044151   25198 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1003 18:17:14.044424   25198 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1003 18:17:14.044645   25198 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1003 18:17:14.044699   25198 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1003 18:17:14.049489   25198 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1003 18:17:14.050669   25198 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:17:14.051048   25198 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1003 18:17:14.051089   25198 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 18:17:14.051350   25198 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1003 18:17:14.051371   25198 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1003 18:17:14.054013   25198 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1003 18:17:14.054181   25198 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1003 18:17:14.708734   25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1003 18:17:14.729631   25198 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1003 18:17:14.729666   25198 docker.go:317] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1003 18:17:14.729734   25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1003 18:17:14.750236   25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1003 18:17:15.184770   25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1003 18:17:15.205615   25198 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1003 18:17:15.205641   25198 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1003 18:17:15.205690   25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1003 18:17:15.227400   25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1003 18:17:15.254313   25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:17:15.491521   25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 18:17:15.512091   25198 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1003 18:17:15.512117   25198 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 18:17:15.512166   25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 18:17:15.533031   25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1003 18:17:15.803671   25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1003 18:17:15.824616   25198 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1003 18:17:15.824642   25198 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1003 18:17:15.824699   25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1003 18:17:15.844939   25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1003 18:17:16.098696   25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1003 18:17:16.120109   25198 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1003 18:17:16.120134   25198 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1003 18:17:16.120189   25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1003 18:17:16.140909   25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1003 18:17:16.416800   25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1003 18:17:16.437931   25198 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1003 18:17:16.437971   25198 docker.go:317] Removing image: registry.k8s.io/pause:3.2
	I1003 18:17:16.438025   25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1003 18:17:16.458630   25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1003 18:17:16.754937   25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1003 18:17:16.776851   25198 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1003 18:17:16.776877   25198 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.7
	I1003 18:17:16.776944   25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1003 18:17:16.796897   25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1003 18:17:16.796939   25198 cache_images.go:92] LoadImages completed in 2.759434173s
	W1003 18:17:16.796986   25198 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
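The fallback then fails for a simpler reason: the image cache was never populated in this run, so the very first stat (etcd_3.4.3-0) misses and the whole batch is abandoned. The warning is non-fatal; kubeadm's preflight pull fetches whatever is absent. To check or pre-populate the cache (illustrative commands; paths from this run):

    ls "$HOME/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/"
    # minikube can fill the cache itself, e.g.:
    minikube cache add registry.k8s.io/etcd:3.4.3-0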
	I1003 18:17:16.797062   25198 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 18:17:16.851199   25198 cni.go:84] Creating CNI manager for ""
	I1003 18:17:16.851215   25198 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 18:17:16.851230   25198 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1003 18:17:16.851247   25198 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-022000 NodeName:ingress-addon-legacy-022000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1003 18:17:16.851364   25198 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-022000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
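That rendered config is what gets written to /var/tmp/minikube/kubeadm.yaml.new below and copied into place just before init. Once copied, it can be parse-checked without mutating the node (a sketch; the --dry-run flag exists on the v1.18 kubeadm used here, run inside the container):

    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run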
	
	I1003 18:17:16.851433   25198 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-022000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
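kubeadm.go:976 renders the systemd drop-in shown above; note the dockershim-era wiring (--container-runtime=docker, criSocket /var/run/dockershim.sock) that v1.18 still expects. On the node, the effective unit and the flags kubeadm later writes for it can be inspected with:

    sudo systemctl cat kubelet               # unit plus the 10-kubeadm.conf drop-in
    cat /var/lib/kubelet/kubeadm-flags.env   # written by kubeadm init (see below)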
	I1003 18:17:16.851503   25198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1003 18:17:16.861230   25198 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:17:16.861283   25198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:17:16.870695   25198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1003 18:17:16.887971   25198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1003 18:17:16.905789   25198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1003 18:17:16.922989   25198 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:17:16.927754   25198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:17:16.939430   25198 certs.go:56] Setting up /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000 for IP: 192.168.49.2
	I1003 18:17:16.939453   25198 certs.go:190] acquiring lock for shared ca certs: {Name:mkadefe5d54c46ee473565278d437df4894e94b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:17:16.939639   25198 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.key
	I1003 18:17:16.939697   25198 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17348-21848/.minikube/proxy-client-ca.key
	I1003 18:17:16.939746   25198 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.key
	I1003 18:17:16.939759   25198 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.crt with IP's: []
	I1003 18:17:16.978781   25198 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.crt ...
	I1003 18:17:16.978794   25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.crt: {Name:mk6f567f53ff613362aaff9d5ce6fe5f16cdaf75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:17:16.979136   25198 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.key ...
	I1003 18:17:16.979145   25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.key: {Name:mk522dc078d0ee34b187de1b6bdfd0a1d23e4c87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:17:16.979388   25198 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key.dd3b5fb2
	I1003 18:17:16.979404   25198 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1003 18:17:17.085662   25198 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt.dd3b5fb2 ...
	I1003 18:17:17.085671   25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt.dd3b5fb2: {Name:mkb4f2e32dfb3a804841d143820286dd7389e5ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:17:17.085922   25198 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key.dd3b5fb2 ...
	I1003 18:17:17.085930   25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key.dd3b5fb2: {Name:mka2a24b441d60e88bb44c5006adb92431c10e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:17:17.086124   25198 certs.go:337] copying /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt
	I1003 18:17:17.086306   25198 certs.go:341] copying /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key
	I1003 18:17:17.086483   25198 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.key
	I1003 18:17:17.086496   25198 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.crt with IP's: []
	I1003 18:17:17.192561   25198 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.crt ...
	I1003 18:17:17.192570   25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.crt: {Name:mkc14c9e12b1678a9d4c0469dc5082d5ecda6bf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:17:17.192809   25198 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.key ...
	I1003 18:17:17.192817   25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.key: {Name:mk624f3883ac4fd328415fd64824fc4487304d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
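Three leaf pairs are minted here against the pre-existing CAs: the kubectl client cert, the apiserver serving cert (IP SANs 192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1), and the front-proxy client cert. An openssl equivalent of the client-cert step (a sketch of the technique, not minikube's crypto.go; the CN/O subject is an assumption):

    MK="$HOME/minikube-integration/17348-21848/.minikube"
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key \
      -subj "/CN=minikube-user/O=system:masters" -out client.csr   # subject assumed
    openssl x509 -req -in client.csr -CA "$MK/ca.crt" -CAkey "$MK/ca.key" \
      -CAcreateserial -days 365 -out client.crt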
	I1003 18:17:17.193020   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 18:17:17.193046   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 18:17:17.193063   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 18:17:17.193078   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 18:17:17.193098   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 18:17:17.193114   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 18:17:17.193129   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 18:17:17.193151   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 18:17:17.193238   25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/22318.pem (1338 bytes)
	W1003 18:17:17.193287   25198 certs.go:433] ignoring /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/22318_empty.pem, impossibly tiny 0 bytes
	I1003 18:17:17.193296   25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca-key.pem (1675 bytes)
	I1003 18:17:17.193327   25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem (1078 bytes)
	I1003 18:17:17.193355   25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:17:17.193391   25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/key.pem (1675 bytes)
	I1003 18:17:17.193459   25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem (1708 bytes)
	I1003 18:17:17.193490   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:17:17.193515   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/22318.pem -> /usr/share/ca-certificates/22318.pem
	I1003 18:17:17.193532   25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem -> /usr/share/ca-certificates/223182.pem
	I1003 18:17:17.193989   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1003 18:17:17.217741   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:17:17.240463   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:17:17.263741   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 18:17:17.286942   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:17:17.310574   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 18:17:17.333484   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:17:17.356921   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:17:17.380381   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:17:17.403862   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/22318.pem --> /usr/share/ca-certificates/22318.pem (1338 bytes)
	I1003 18:17:17.426663   25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem --> /usr/share/ca-certificates/223182.pem (1708 bytes)
	I1003 18:17:17.449699   25198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:17:17.467348   25198 ssh_runner.go:195] Run: openssl version
	I1003 18:17:17.473670   25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:17:17.483877   25198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:17:17.488355   25198 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 01:07 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:17:17.488402   25198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:17:17.495418   25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:17:17.505756   25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22318.pem && ln -fs /usr/share/ca-certificates/22318.pem /etc/ssl/certs/22318.pem"
	I1003 18:17:17.515862   25198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22318.pem
	I1003 18:17:17.520502   25198 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 01:12 /usr/share/ca-certificates/22318.pem
	I1003 18:17:17.520564   25198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22318.pem
	I1003 18:17:17.527505   25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22318.pem /etc/ssl/certs/51391683.0"
	I1003 18:17:17.537448   25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/223182.pem && ln -fs /usr/share/ca-certificates/223182.pem /etc/ssl/certs/223182.pem"
	I1003 18:17:17.547644   25198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/223182.pem
	I1003 18:17:17.552476   25198 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 01:12 /usr/share/ca-certificates/223182.pem
	I1003 18:17:17.552532   25198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/223182.pem
	I1003 18:17:17.559469   25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/223182.pem /etc/ssl/certs/3ec20f2e.0"
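The hash/ln pairs above follow OpenSSL's CA-directory convention: `openssl x509 -hash` prints the subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs is what makes the certificate discoverable during verification; b5213941, 51391683 and 3ec20f2e are the hashes computed for the three PEMs in this run. By hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"   # b5213941.0 here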
	I1003 18:17:17.569492   25198 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1003 18:17:17.574117   25198 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1003 18:17:17.574162   25198 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:17:17.574257   25198 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 18:17:17.594128   25198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:17:17.604116   25198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:17:17.613560   25198 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:17:17.613615   25198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:17:17.622881   25198 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:17:17.622913   25198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:17:17.674961   25198 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1003 18:17:17.675020   25198 kubeadm.go:322] [preflight] Running pre-flight checks
	I1003 18:17:17.924971   25198 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:17:17.925058   25198 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:17:17.925161   25198 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 18:17:18.112344   25198 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:17:18.113210   25198 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:17:18.113243   25198 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1003 18:17:18.200658   25198 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:17:18.222337   25198 out.go:204]   - Generating certificates and keys ...
	I1003 18:17:18.222414   25198 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1003 18:17:18.222474   25198 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1003 18:17:18.540190   25198 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:17:18.635670   25198 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:17:18.762271   25198 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:17:18.973246   25198 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1003 18:17:19.042381   25198 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1003 18:17:19.042515   25198 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:17:19.096707   25198 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1003 18:17:19.096844   25198 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:17:19.155292   25198 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:17:19.323478   25198 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:17:19.410712   25198 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1003 18:17:19.410821   25198 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:17:19.610306   25198 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:17:19.705894   25198 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:17:19.925086   25198 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:17:20.131727   25198 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:17:20.132236   25198 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:17:20.153782   25198 out.go:204]   - Booting up control plane ...
	I1003 18:17:20.153954   25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:17:20.154082   25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:17:20.154204   25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:17:20.154360   25198 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:17:20.154607   25198 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 18:18:00.141467   25198 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1003 18:18:00.142944   25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:18:00.143172   25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:18:05.144150   25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:18:05.144384   25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:18:15.146101   25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:18:15.146331   25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:18:35.147365   25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:18:35.147604   25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:19:15.150240   25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:19:15.150580   25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:19:15.150604   25198 kubeadm.go:322] 
	I1003 18:19:15.150680   25198 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1003 18:19:15.150748   25198 kubeadm.go:322] 		timed out waiting for the condition
	I1003 18:19:15.150759   25198 kubeadm.go:322] 
	I1003 18:19:15.150792   25198 kubeadm.go:322] 	This error is likely caused by:
	I1003 18:19:15.150823   25198 kubeadm.go:322] 		- The kubelet is not running
	I1003 18:19:15.150961   25198 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1003 18:19:15.150982   25198 kubeadm.go:322] 
	I1003 18:19:15.151220   25198 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1003 18:19:15.151296   25198 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1003 18:19:15.151401   25198 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1003 18:19:15.151419   25198 kubeadm.go:322] 
	I1003 18:19:15.151539   25198 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1003 18:19:15.151639   25198 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:19:15.151650   25198 kubeadm.go:322] 
	I1003 18:19:15.151722   25198 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1003 18:19:15.151768   25198 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1003 18:19:15.151835   25198 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1003 18:19:15.151865   25198 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1003 18:19:15.151873   25198 kubeadm.go:322] 
	I1003 18:19:15.154040   25198 kubeadm.go:322] W1004 01:17:17.674141    1707 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1003 18:19:15.154256   25198 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1003 18:19:15.154326   25198 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1003 18:19:15.154442   25198 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1003 18:19:15.154532   25198 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:19:15.154625   25198 kubeadm.go:322] W1004 01:17:20.136615    1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1003 18:19:15.154713   25198 kubeadm.go:322] W1004 01:17:20.137673    1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1003 18:19:15.154775   25198 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1003 18:19:15.154850   25198 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
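This is the real failure of the run: the kubelet never comes healthy, so kubeadm's probe of http://localhost:10248/healthz is refused for the entire 4m0s wait-control-plane window and init aborts. Because the "node" is a Docker container, the triage kubeadm suggests runs through docker exec (names from this run; the kubelet journal itself is not included in this log):

    NODE=ingress-addon-legacy-022000
    docker exec "$NODE" systemctl status kubelet --no-pager
    docker exec "$NODE" journalctl -xeu kubelet --no-pager | tail -n 50
    # Probe the same endpoint the kubelet-check uses:
    docker exec "$NODE" curl -sS http://localhost:10248/healthz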
	W1003 18:19:15.154921   25198 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:17:17.674141    1707 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:17:20.136615    1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:17:20.137673    1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:17:17.674141    1707 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:17:20.136615    1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:17:20.137673    1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1003 18:19:15.154955   25198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1003 18:19:15.571581   25198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:19:15.583548   25198 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:19:15.583602   25198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:19:15.592956   25198 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:19:15.592981   25198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:19:15.645954   25198 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1003 18:19:15.646023   25198 kubeadm.go:322] [preflight] Running pre-flight checks
	I1003 18:19:15.899579   25198 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:19:15.899685   25198 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:19:15.899762   25198 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 18:19:16.085606   25198 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:19:16.086432   25198 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:19:16.086492   25198 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1003 18:19:16.178086   25198 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:19:16.199695   25198 out.go:204]   - Generating certificates and keys ...
	I1003 18:19:16.199758   25198 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1003 18:19:16.199822   25198 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1003 18:19:16.199875   25198 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 18:19:16.199942   25198 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1003 18:19:16.200016   25198 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 18:19:16.200094   25198 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1003 18:19:16.200190   25198 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1003 18:19:16.200277   25198 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1003 18:19:16.200368   25198 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 18:19:16.200437   25198 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 18:19:16.200471   25198 kubeadm.go:322] [certs] Using the existing "sa" key
	I1003 18:19:16.200524   25198 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:19:16.332251   25198 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:19:16.390500   25198 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:19:16.515656   25198 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:19:16.659205   25198 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:19:16.659704   25198 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:19:16.681517   25198 out.go:204]   - Booting up control plane ...
	I1003 18:19:16.681670   25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:19:16.681825   25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:19:16.681963   25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:19:16.682112   25198 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:19:16.682440   25198 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 18:19:56.669885   25198 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1003 18:19:56.670502   25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:19:56.670761   25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:20:01.671493   25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:20:01.671744   25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:20:11.673163   25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:20:11.673396   25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:20:31.673795   25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:20:31.673951   25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:21:11.676813   25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1003 18:21:11.677089   25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1003 18:21:11.677102   25198 kubeadm.go:322] 
	I1003 18:21:11.677143   25198 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1003 18:21:11.677208   25198 kubeadm.go:322] 		timed out waiting for the condition
	I1003 18:21:11.677240   25198 kubeadm.go:322] 
	I1003 18:21:11.677310   25198 kubeadm.go:322] 	This error is likely caused by:
	I1003 18:21:11.677363   25198 kubeadm.go:322] 		- The kubelet is not running
	I1003 18:21:11.677541   25198 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1003 18:21:11.677555   25198 kubeadm.go:322] 
	I1003 18:21:11.677664   25198 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1003 18:21:11.677712   25198 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1003 18:21:11.677781   25198 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1003 18:21:11.677807   25198 kubeadm.go:322] 
	I1003 18:21:11.677933   25198 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1003 18:21:11.678031   25198 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 18:21:11.678051   25198 kubeadm.go:322] 
	I1003 18:21:11.678139   25198 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1003 18:21:11.678200   25198 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1003 18:21:11.678275   25198 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1003 18:21:11.678304   25198 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1003 18:21:11.678308   25198 kubeadm.go:322] 
	I1003 18:21:11.680008   25198 kubeadm.go:322] W1004 01:19:15.645248    4782 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1003 18:21:11.680160   25198 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1003 18:21:11.680221   25198 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1003 18:21:11.680326   25198 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1003 18:21:11.680401   25198 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:21:11.680498   25198 kubeadm.go:322] W1004 01:19:16.664459    4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1003 18:21:11.680607   25198 kubeadm.go:322] W1004 01:19:16.665222    4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1003 18:21:11.680671   25198 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1003 18:21:11.680754   25198 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1003 18:21:11.680785   25198 kubeadm.go:406] StartCluster complete in 3m54.10430272s
	I1003 18:21:11.680876   25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 18:21:11.700816   25198 logs.go:284] 0 containers: []
	W1003 18:21:11.700832   25198 logs.go:286] No container was found matching "kube-apiserver"
	I1003 18:21:11.700909   25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 18:21:11.720611   25198 logs.go:284] 0 containers: []
	W1003 18:21:11.720623   25198 logs.go:286] No container was found matching "etcd"
	I1003 18:21:11.720689   25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 18:21:11.741371   25198 logs.go:284] 0 containers: []
	W1003 18:21:11.741384   25198 logs.go:286] No container was found matching "coredns"
	I1003 18:21:11.741452   25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 18:21:11.761584   25198 logs.go:284] 0 containers: []
	W1003 18:21:11.761598   25198 logs.go:286] No container was found matching "kube-scheduler"
	I1003 18:21:11.761673   25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 18:21:11.782175   25198 logs.go:284] 0 containers: []
	W1003 18:21:11.782188   25198 logs.go:286] No container was found matching "kube-proxy"
	I1003 18:21:11.782268   25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 18:21:11.802707   25198 logs.go:284] 0 containers: []
	W1003 18:21:11.802723   25198 logs.go:286] No container was found matching "kube-controller-manager"
	I1003 18:21:11.802831   25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 18:21:11.823031   25198 logs.go:284] 0 containers: []
	W1003 18:21:11.823046   25198 logs.go:286] No container was found matching "kindnet"
	I1003 18:21:11.823061   25198 logs.go:123] Gathering logs for kubelet ...
	I1003 18:21:11.823069   25198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 18:21:11.861029   25198 logs.go:123] Gathering logs for dmesg ...
	I1003 18:21:11.861043   25198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 18:21:11.875127   25198 logs.go:123] Gathering logs for describe nodes ...
	I1003 18:21:11.875141   25198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 18:21:11.931996   25198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 18:21:11.932019   25198 logs.go:123] Gathering logs for Docker ...
	I1003 18:21:11.932025   25198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 18:21:11.949079   25198 logs.go:123] Gathering logs for container status ...
	I1003 18:21:11.949093   25198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 18:21:12.004355   25198 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:19:15.645248    4782 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:19:16.664459    4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:19:16.665222    4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1003 18:21:12.004377   25198 out.go:239] * 
	W1003 18:21:12.004431   25198 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:19:15.645248    4782 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:19:16.664459    4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:19:16.665222    4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:21:12.004457   25198 out.go:239] * 
	W1003 18:21:12.005081   25198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:21:12.069742   25198 out.go:177] 
	W1003 18:21:12.111781   25198 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1004 01:19:15.645248    4782 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1004 01:19:16.664459    4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1004 01:19:16.665222    4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 18:21:12.111823   25198 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1003 18:21:12.111847   25198 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1003 18:21:12.153760   25198 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-022000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (262.66s)
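Note: every retry above fails the same way: the kubelet never answers on http://localhost:10248/healthz, so kubeadm times out in the wait-control-plane phase and no control-plane containers are ever created (each 'docker ps -a --filter=name=k8s_...' probe returns 0 containers). The preflight warnings point at a cgroup-driver mismatch (Docker reports "cgroupfs"; kubeadm recommends "systemd"), which matches minikube's own suggestion in the log. A minimal manual retry of that suggestion, sketched under the assumption that the same profile name, driver, and test binary as the failing run are reused:

	# Hypothetical reproduction of the workaround suggested in the log above.
	# Flags mirror the failing invocation; only the kubelet cgroup driver is added.
	out/minikube-darwin-amd64 delete -p ingress-addon-legacy-022000
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-022000 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still refuses to start, 'minikube ssh -p ingress-addon-legacy-022000' followed by 'journalctl -xeu kubelet' (the command kubeadm itself recommends above) should show why.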

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (79.65s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-022000 addons enable ingress --alsologtostderr -v=5
E1003 18:22:07.803578   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-022000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m19.218855177s)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I1003 18:21:12.300978   25491 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:21:12.301451   25491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:21:12.301456   25491 out.go:309] Setting ErrFile to fd 2...
	I1003 18:21:12.301460   25491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:21:12.301649   25491 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
	I1003 18:21:12.302276   25491 config.go:182] Loaded profile config "ingress-addon-legacy-022000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:21:12.302296   25491 addons.go:594] checking whether the cluster is paused
	I1003 18:21:12.302373   25491 config.go:182] Loaded profile config "ingress-addon-legacy-022000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:21:12.302394   25491 host.go:66] Checking if "ingress-addon-legacy-022000" exists ...
	I1003 18:21:12.302827   25491 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Status}}
	I1003 18:21:12.354255   25491 ssh_runner.go:195] Run: systemctl --version
	I1003 18:21:12.354344   25491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:21:12.406084   25491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
	I1003 18:21:12.495498   25491 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 18:21:12.537069   25491 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1003 18:21:12.557963   25491 config.go:182] Loaded profile config "ingress-addon-legacy-022000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:21:12.557984   25491 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-022000"
	I1003 18:21:12.557993   25491 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-022000"
	I1003 18:21:12.558045   25491 host.go:66] Checking if "ingress-addon-legacy-022000" exists ...
	I1003 18:21:12.558447   25491 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Status}}
	I1003 18:21:12.630937   25491 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1003 18:21:12.653014   25491 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1003 18:21:12.673739   25491 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1003 18:21:12.696096   25491 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1003 18:21:12.718291   25491 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1003 18:21:12.718320   25491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1003 18:21:12.718452   25491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:21:12.770640   25491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
	I1003 18:21:12.871367   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:21:12.928630   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:12.928659   25491 retry.go:31] will retry after 301.242227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:13.232271   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:21:13.287572   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:13.287597   25491 retry.go:31] will retry after 366.645162ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:13.655318   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:21:13.712177   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:13.712201   25491 retry.go:31] will retry after 692.126641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:14.405181   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:21:14.459874   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:14.459894   25491 retry.go:31] will retry after 910.474952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:15.371732   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:21:15.429027   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:15.429047   25491 retry.go:31] will retry after 1.880477924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:17.310697   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:21:17.381699   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:17.381719   25491 retry.go:31] will retry after 2.370453613s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:19.754171   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:21:19.810912   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:19.810932   25491 retry.go:31] will retry after 1.819904314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:21.633144   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:21:21.689896   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:21.689928   25491 retry.go:31] will retry after 2.558919368s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:24.249480   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:21:24.306953   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:24.306978   25491 retry.go:31] will retry after 9.598759524s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:33.906112   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:21:33.961402   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:33.961421   25491 retry.go:31] will retry after 13.441797319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:47.403617   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:21:47.460297   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:21:47.460318   25491 retry.go:31] will retry after 19.935641514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:07.397128   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:22:07.453997   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:07.454015   25491 retry.go:31] will retry after 23.849240052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:31.305593   25491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1003 18:22:31.363005   25491 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:31.363036   25491 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-022000"
	I1003 18:22:31.384616   25491 out.go:177] * Verifying ingress addon...
	I1003 18:22:31.407674   25491 out.go:177] 
	W1003 18:22:31.429750   25491 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-022000" does not exist: client config: context "ingress-addon-legacy-022000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-022000" does not exist: client config: context "ingress-addon-legacy-022000" does not exist]
	W1003 18:22:31.429779   25491 out.go:239] * 
	* 
	W1003 18:22:31.434752   25491 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:22:31.456660   25491 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-022000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-022000:

-- stdout --
	[
	    {
	        "Id": "45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62",
	        "Created": "2023-10-04T01:17:00.237767501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 486823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-04T01:17:00.473655014Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4be5e4437dc6caee1cf05a235e18cf959ee382af7eb38951ea71e5ff2b62d458",
	        "ResolvConfPath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/hosts",
	        "LogPath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62-json.log",
	        "Name": "/ingress-addon-legacy-022000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-022000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-022000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031-init/diff:/var/lib/docker/overlay2/1870aae2df735f8bb761fd42fce33e7379805ae08cbf6efc89ec69910dae4b59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-022000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-022000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-022000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-022000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-022000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "17ef7e5da6b7d491dea84ca3ad7210299eb408769e618e70502c150ec9c5c410",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56379"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56375"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56376"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56377"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56378"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/17ef7e5da6b7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-022000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "45c7d0832b43",
	                        "ingress-addon-legacy-022000"
	                    ],
	                    "NetworkID": "72ff8820f8a2d8a3fb8e4d5cec9e504c131207cb51dadbc9ce0379124dedd7ee",
	                    "EndpointID": "0f59a09bd48eb2290828a7e20377dcd8e27c3c18f2eff5adbde525c80d00256a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-022000 -n ingress-addon-legacy-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-022000 -n ingress-addon-legacy-022000: exit status 6 (371.382119ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:22:31.895546   25524 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-022000" does not appear in /Users/jenkins/minikube-integration/17348-21848/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-022000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (79.65s)
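As the status output above notes, kubectl is pointing at a stale context because "ingress-addon-legacy-022000" no longer appears in the kubeconfig. The log's own suggested repair, sketched here with an assumed -p flag to target this profile (the log names only the bare `minikube update-context` command):

	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-022000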

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (104.58s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-022000 addons enable ingress-dns --alsologtostderr -v=5
E1003 18:24:09.445393   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-022000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m44.158706602s)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I1003 18:22:31.948484   25534 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:22:31.948801   25534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:22:31.948806   25534 out.go:309] Setting ErrFile to fd 2...
	I1003 18:22:31.948810   25534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:22:31.948983   25534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
	I1003 18:22:31.949616   25534 config.go:182] Loaded profile config "ingress-addon-legacy-022000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:22:31.949632   25534 addons.go:594] checking whether the cluster is paused
	I1003 18:22:31.949716   25534 config.go:182] Loaded profile config "ingress-addon-legacy-022000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:22:31.949735   25534 host.go:66] Checking if "ingress-addon-legacy-022000" exists ...
	I1003 18:22:31.950160   25534 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Status}}
	I1003 18:22:32.001351   25534 ssh_runner.go:195] Run: systemctl --version
	I1003 18:22:32.001443   25534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:22:32.051485   25534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
	I1003 18:22:32.141200   25534 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 18:22:32.181040   25534 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1003 18:22:32.202859   25534 config.go:182] Loaded profile config "ingress-addon-legacy-022000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 18:22:32.202885   25534 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-022000"
	I1003 18:22:32.202897   25534 addons.go:231] Setting addon ingress-dns=true in "ingress-addon-legacy-022000"
	I1003 18:22:32.202947   25534 host.go:66] Checking if "ingress-addon-legacy-022000" exists ...
	I1003 18:22:32.203508   25534 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Status}}
	I1003 18:22:32.275928   25534 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1003 18:22:32.297884   25534 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1003 18:22:32.319703   25534 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1003 18:22:32.319724   25534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1003 18:22:32.319845   25534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
	I1003 18:22:32.372569   25534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
	I1003 18:22:32.475329   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:22:32.529270   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:32.529302   25534 retry.go:31] will retry after 175.849614ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:32.705917   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:22:32.763723   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:32.763753   25534 retry.go:31] will retry after 355.596815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:33.119550   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:22:33.178348   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:33.178368   25534 retry.go:31] will retry after 339.598285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:33.520256   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:22:33.576663   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:33.576680   25534 retry.go:31] will retry after 753.726316ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:34.332802   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:22:34.389143   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:34.389171   25534 retry.go:31] will retry after 1.055423832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:35.445538   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:22:35.501981   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:35.501998   25534 retry.go:31] will retry after 1.186056988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:36.689257   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:22:36.746625   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:36.746644   25534 retry.go:31] will retry after 3.579941922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:40.327122   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:22:40.384199   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:40.384220   25534 retry.go:31] will retry after 6.212471374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:46.597640   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:22:46.656684   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:46.656704   25534 retry.go:31] will retry after 8.895110863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:55.552121   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:22:55.607944   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:22:55.607963   25534 retry.go:31] will retry after 13.558666001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:23:09.167213   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:23:09.225315   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:23:09.225336   25534 retry.go:31] will retry after 10.9750179s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:23:20.202264   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:23:20.259048   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:23:20.259066   25534 retry.go:31] will retry after 21.377778014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:23:41.638799   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:23:41.696114   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:23:41.696132   25534 retry.go:31] will retry after 34.224483565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:24:15.921530   25534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1003 18:24:15.975979   25534 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1003 18:24:15.997945   25534 out.go:177] 
	W1003 18:24:16.019434   25534 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1003 18:24:16.019465   25534 out.go:239] * 
	* 
	W1003 18:24:16.024238   25534 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:24:16.045604   25534 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-022000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-022000:

-- stdout --
	[
	    {
	        "Id": "45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62",
	        "Created": "2023-10-04T01:17:00.237767501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 486823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-04T01:17:00.473655014Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4be5e4437dc6caee1cf05a235e18cf959ee382af7eb38951ea71e5ff2b62d458",
	        "ResolvConfPath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/hosts",
	        "LogPath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62-json.log",
	        "Name": "/ingress-addon-legacy-022000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-022000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-022000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031-init/diff:/var/lib/docker/overlay2/1870aae2df735f8bb761fd42fce33e7379805ae08cbf6efc89ec69910dae4b59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-022000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-022000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-022000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-022000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-022000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "17ef7e5da6b7d491dea84ca3ad7210299eb408769e618e70502c150ec9c5c410",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56379"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56375"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56376"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56377"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56378"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/17ef7e5da6b7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-022000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "45c7d0832b43",
	                        "ingress-addon-legacy-022000"
	                    ],
	                    "NetworkID": "72ff8820f8a2d8a3fb8e4d5cec9e504c131207cb51dadbc9ce0379124dedd7ee",
	                    "EndpointID": "0f59a09bd48eb2290828a7e20377dcd8e27c3c18f2eff5adbde525c80d00256a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-022000 -n ingress-addon-legacy-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-022000 -n ingress-addon-legacy-022000: exit status 6 (367.150526ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:24:16.478983   25570 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-022000" does not appear in /Users/jenkins/minikube-integration/17348-21848/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-022000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (104.58s)
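The exit status 6 above is not a crashed node: the docker inspect output shows the container running, but `minikube status` cannot extract an API endpoint because the profile has no entry in the run's kubeconfig (status.go:415 above). A minimal sketch of that lookup, assuming client-go's clientcmd loader in a stand-alone main; the real logic lives in minikube's status.go, and the path and exit code below merely mirror the log for illustration:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; the report uses a per-run kubeconfig under
	// /Users/jenkins/minikube-integration/17348-21848/kubeconfig.
	path := clientcmd.RecommendedHomeFile

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}

	const profile = "ingress-addon-legacy-022000"
	cluster, ok := cfg.Clusters[profile]
	if !ok {
		// The condition surfaced above as `"ingress-addon-legacy-022000"
		// does not appear in .../kubeconfig`, reported as exit status 6.
		fmt.Fprintf(os.Stderr, "%q does not appear in %s\n", profile, path)
		os.Exit(6)
	}
	fmt.Println("endpoint:", cluster.Server)
}

The remediation the output itself suggests, `minikube update-context`, rewrites exactly this kubeconfig entry.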

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:179: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-022000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-022000:

-- stdout --
	[
	    {
	        "Id": "45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62",
	        "Created": "2023-10-04T01:17:00.237767501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 486823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-04T01:17:00.473655014Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4be5e4437dc6caee1cf05a235e18cf959ee382af7eb38951ea71e5ff2b62d458",
	        "ResolvConfPath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/hostname",
	        "HostsPath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/hosts",
	        "LogPath": "/var/lib/docker/containers/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62/45c7d0832b43e666774ae026157e040d828cde20600a9a5fb581346dea677b62-json.log",
	        "Name": "/ingress-addon-legacy-022000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-022000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-022000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031-init/diff:/var/lib/docker/overlay2/1870aae2df735f8bb761fd42fce33e7379805ae08cbf6efc89ec69910dae4b59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cbb94dc2d8d63bf9dd2c1137ae2a7172bbebb09bf92f34b186f0761cfd6a0031/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-022000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-022000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-022000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-022000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-022000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "17ef7e5da6b7d491dea84ca3ad7210299eb408769e618e70502c150ec9c5c410",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56379"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56375"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56376"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56377"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56378"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/17ef7e5da6b7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-022000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "45c7d0832b43",
	                        "ingress-addon-legacy-022000"
	                    ],
	                    "NetworkID": "72ff8820f8a2d8a3fb8e4d5cec9e504c131207cb51dadbc9ce0379124dedd7ee",
	                    "EndpointID": "0f59a09bd48eb2290828a7e20377dcd8e27c3c18f2eff5adbde525c80d00256a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-022000 -n ingress-addon-legacy-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-022000 -n ingress-addon-legacy-022000: exit status 6 (368.916927ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 18:24:16.901228   25582 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-022000" does not appear in /Users/jenkins/minikube-integration/17348-21848/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-022000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)
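This subtest fails in 0.42s because client construction never succeeds: with the profile absent from the kubeconfig there is no context to build a rest.Config from, which is what addons_test.go:179 reports as "failed to get Kubernetes client: <nil>". A hedged sketch of the usual client-go construction such a helper performs; the function name and environment lookup are hypothetical, not minikube's actual helper:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// clientForContext is a hypothetical stand-in for the test helper: it builds
// a typed clientset for one named context in an explicit kubeconfig file.
func clientForContext(kubeconfig, context string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		&clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfig},
		&clientcmd.ConfigOverrides{CurrentContext: context},
	).ClientConfig()
	if err != nil {
		// With the context missing, this is the path taken; the caller is
		// left with a nil clientset, as in the failure above.
		return nil, err
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := clientForContext(os.Getenv("KUBECONFIG"), "ingress-addon-legacy-022000"); err != nil {
		fmt.Fprintln(os.Stderr, "failed to get Kubernetes client:", err)
		os.Exit(1)
	}
	fmt.Println("client ready")
}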

TestMountStart/serial/RestartStopped (7200.703s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-341000
E1003 18:29:09.449952   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:29:23.958183   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:30:32.498152   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:34:09.456953   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:34:23.965863   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:35:47.023969   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:39:09.467256   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:39:23.976582   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-2-341000: signal: killed (14m40.571850278s)

-- stdout --
	* [mount-start-2-341000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes in cluster mount-start-2-341000
	* Pulling base image ...
	* Restarting existing docker container for "mount-start-2-341000" ...

-- /stdout --
mount_start_test.go:168: restart failed: "out/minikube-darwin-amd64 start -p mount-start-2-341000" : signal: killed
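The start command itself was apparently bounded by a per-command deadline: it died with `signal: killed` after 14m40s. What consumes the rest of the run is the post-mortem log collection, which also hangs (goroutine 1456 below sits in a syscall wait for 84 minutes) until the package-wide timeout fires at 2h. A sketch of the bounding pattern, with a hypothetical helper name; the real harness runs commands via test/integration/helpers_test.go:

package integration

import (
	"context"
	"os/exec"
	"testing"
	"time"
)

// runWithDeadline is a hypothetical helper: it runs one command under a
// context deadline so a hung child fails the subtest quickly instead of
// riding out the package-wide -timeout alarm.
func runWithDeadline(t *testing.T, d time.Duration, name string, args ...string) []byte {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), d)
	defer cancel()
	out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		t.Fatalf("%s timed out after %v:\n%s", name, d, out)
	}
	if err != nil {
		t.Fatalf("%s: %v\n%s", name, err, out)
	}
	return out
}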
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/RestartStopped]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-341000
E1003 18:44:09.477734   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:44:23.987913   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:47:12.533280   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:49:09.490355   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:49:23.998422   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:52:27.103259   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:54:09.545852   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:54:24.055251   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:59:09.561891   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:59:24.069938   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:03:52.625619   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:04:09.576762   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:04:24.085470   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:09:07.159081   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:09:09.593621   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:09:24.102049   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:14:09.609768   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:14:24.118525   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:19:09.625171   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:19:24.135739   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:20:32.681509   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:24:09.641222   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:24:24.150218   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:25:47.213998   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:29:09.657738   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:29:24.168270   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:34:09.673823   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:34:24.184724   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:37:12.738097   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:39:09.691076   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:39:24.199008   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:42:27.269683   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:44:09.706497   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:44:24.217264   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:49:09.722696   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:49:24.233641   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:53:52.795229   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:54:09.740214   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:54:24.248720   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:59:07.326016   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 19:59:09.755593   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 19:59:24.266347   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 20:04:09.772572   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 20:04:24.280833   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestMountStart (1h38m10s)
	TestMountStart/serial (1h38m10s)
	TestMountStart/serial/RestartStopped (1h37m51s)
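The panic is Go's own per-binary watchdog, not a minikube failure: testing.(*M).startAlarm fires once the -timeout duration (2h0m0s here) elapses, panics with "test timed out", and dumps every goroutine, which is where the traces below come from. The behavior reproduces with nothing but the standard library:

package demo

import (
	"testing"
	"time"
)

// Run with: go test -run TestHang -timeout 2s
// After two seconds the test binary panics with "test timed out after 2s"
// and prints a goroutine dump shaped like the one below.
func TestHang(t *testing.T) {
	time.Sleep(time.Hour)
}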

goroutine 1460 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

goroutine 1 [chan receive, 98 minutes]:
testing.(*T).Run(0xc000702d00, {0x31176aa?, 0x5375f348891?}, 0x34d5fb0)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
testing.runTests.func1(0x4c40cc0?)
	/usr/local/go/src/testing/testing.go:2054 +0x3e
testing.tRunner(0xc000702d00, 0xc0009afb80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
testing.runTests(0xc000a6c640?, {0x4c1dc80, 0x2a, 0x2a}, {0x10b00a5?, 0xc000068180?, 0x4c3f380?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc000a6c640)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00008a6f0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0004d7500)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 126 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0001273c0, 0xc000064300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 115
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cache.go:122 +0x594

goroutine 1422 [chan receive, 98 minutes]:
testing.(*T).Run(0xc00125c000, {0x3105336?, 0x5ad?}, 0xc000e52540)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestMountStart(0xc00125c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/mount_start_test.go:57 +0x26d
testing.tRunner(0xc00125c000, 0x34d5fb0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 130 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc0005a2d00)
	/usr/local/go/src/testing/testing.go:1403 +0x205
k8s.io/minikube/test/integration.MaybeParallel(0xc0005a2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestOffline(0xc0005a2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc0005a2d00, 0x34d5fd0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 482 [chan receive, 115 minutes]:
testing.(*T).Parallel(0xc0012f8b60)
	/usr/local/go/src/testing/testing.go:1403 +0x205
k8s.io/minikube/test/integration.MaybeParallel(0xc0012f8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc0012f8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc0012f8b60, 0x34d5ee0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 125 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0009def00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 115
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 36 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.100.1/klog.go:1141 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 35
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.100.1/klog.go:1137 +0x171

goroutine 1423 [chan receive, 98 minutes]:
testing.(*T).Run(0xc00125c4e0, {0x31178a2?, 0x453e0d0?}, 0xc0006151c0)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestMountStart.func1(0xc00125c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/mount_start_test.go:82 +0x19a
testing.tRunner(0xc00125c4e0, 0xc000e52540)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1422
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 486 [chan receive, 115 minutes]:
testing.(*T).Parallel(0xc0012f91e0)
	/usr/local/go/src/testing/testing.go:1403 +0x205
k8s.io/minikube/test/integration.MaybeParallel(0xc0012f91e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc0012f91e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc0012f91e0, 0x34d5f18)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 483 [chan receive, 115 minutes]:
testing.(*T).Parallel(0xc0012f8d00)
	/usr/local/go/src/testing/testing.go:1403 +0x205
k8s.io/minikube/test/integration.MaybeParallel(0xc0012f8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc0012f8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc0012f8d00, 0x34d5ed8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 950 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc00155db80, 0xc0015dc5a0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 949
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 542 [IO wait, 115 minutes]:
internal/poll.runtime_pollWait(0x4c4851c0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00056cb80?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00056cb80)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00056cb80)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc001379ba0)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc001379ba0)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc0000b11d0, {0x393c820, 0xc001379ba0})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc0000b11d0)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0001031e0?, 0xc0001031e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 539
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x13a

goroutine 128 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000127250, 0x2d)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3922390?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0009dede0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0001273c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3926ee0, 0xc0008f7e60}, 0x1, 0xc000064300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0001107d0?, 0x15d6885?, 0xc0009def00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 126
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:140 +0x1ef

goroutine 145 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x39492d8, 0xc000064300}, 0xc000a75f50, 0x5?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x39492d8, 0xc000064300}, 0x0?, 0x0?, 0xc0000b53f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x39492d8?, 0xc000064300?}, 0xc0006036c0?, 0x1137540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x13cb385?, 0xc0006036c0?, 0xc0008cc580?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 126
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:142 +0x29a

goroutine 146 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 145
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:280 +0xc5

goroutine 731 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0011e2e50, 0x2c)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3922390?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000ce6a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0011e2e80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3926ee0, 0xc000840f60}, 0x1, 0xc000064300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0001137d0?, 0x15d6885?, 0xc000ce6cc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 742
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:140 +0x1ef

goroutine 485 [chan receive, 115 minutes]:
testing.(*T).Parallel(0xc0012f9040)
	/usr/local/go/src/testing/testing.go:1403 +0x205
k8s.io/minikube/test/integration.MaybeParallel(0xc0012f9040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc0012f9040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:83 +0x92
testing.tRunner(0xc0012f9040, 0x34d5f20)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 490 [chan receive, 115 minutes]:
testing.(*T).Parallel(0xc0012f9860)
	/usr/local/go/src/testing/testing.go:1403 +0x205
k8s.io/minikube/test/integration.MaybeParallel(0xc0012f9860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperkitDriverSkipUpgrade(0xc0012f9860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:172 +0x2a
testing.tRunner(0xc0012f9860, 0x34d5f40)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 489 [chan receive, 115 minutes]:
testing.(*T).Parallel(0xc0012f96c0)
	/usr/local/go/src/testing/testing.go:1403 +0x205
k8s.io/minikube/test/integration.MaybeParallel(0xc0012f96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperKitDriverInstallOrUpdate(0xc0012f96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:108 +0x39
testing.tRunner(0xc0012f96c0, 0x34d5f38)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 918 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015b49a0, 0xc00141dc80)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 917
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 484 [chan receive, 115 minutes]:
testing.(*T).Parallel(0xc0012f8ea0)
	/usr/local/go/src/testing/testing.go:1403 +0x205
k8s.io/minikube/test/integration.MaybeParallel(0xc0012f8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc0012f8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc0012f8ea0, 0x34d5ef0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1456 [syscall, 84 minutes]:
syscall.syscall6(0x1010585?, 0xc0009d3648?, 0xc0009d3538?, 0xc0009d3668?, 0x100c0009d3630?, 0x1000000000003?, 0x537a070?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0009d35e0?, 0x1010905?, 0x90?, 0x3071ea0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc001604200?, 0xc0009d3614, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000a3e660)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a62000)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc00125d6c0?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc00125d6c0, 0xc000a62000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.PostMortemLogs(0xc00125d6c0, {0xc00145e258, 0x14}, {0x0, 0x0, 0x113466a?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:231 +0x3b5
runtime.Goexit()
	/usr/local/go/src/runtime/panic.go:523 +0x145
testing.(*common).FailNow(0xc00125d6c0)
	/usr/local/go/src/testing/testing.go:999 +0x4a
testing.(*common).Fatalf(0xc00125d6c0, {0x3131158?, 0xffffffffffffffff?}, {0xc000e5fea0?, 0x4?, 0x4?})
	/usr/local/go/src/testing/testing.go:1083 +0x5e
k8s.io/minikube/test/integration.validateRestart({0x3949118?, 0xc00044e070?}, 0xc00125d6c0, {0xc00145e258?, 0x1094f4f?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/mount_start_test.go:168 +0x20b
k8s.io/minikube/test/integration.TestMountStart.func1.1(0xc00125c4e0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/mount_start_test.go:83 +0x35
testing.tRunner(0xc00125d6c0, 0xc0006151c0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1423
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 747 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc000ebcc60, 0xc0012e1ec0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 746
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 732 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x39492d8, 0xc000064300}, 0xc000dec750, 0xc0006affd8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x39492d8, 0xc000064300}, 0x1?, 0x1?, 0xc000dec7b8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x39492d8?, 0xc000064300?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000dec7d0?, 0x117c287?, 0xc000e52b80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 742
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:142 +0x29a

goroutine 1025 [select, 111 minutes]:
net/http.(*persistConn).writeLoop(0xc000a48480)
	/usr/local/go/src/net/http/transport.go:2421 +0xe5
created by net/http.(*Transport).dialConn in goroutine 1011
	/usr/local/go/src/net/http/transport.go:1777 +0x16f1

goroutine 733 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 732
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:280 +0xc5

goroutine 990 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc00066ec60, 0xc0015dcc60)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 673
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 742 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0011e2e80, 0xc000064300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 682
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cache.go:122 +0x594

goroutine 741 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000ce6cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 682
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 992 [select, 111 minutes]:
net/http.(*persistConn).readLoop(0xc000a48480)
	/usr/local/go/src/net/http/transport.go:2238 +0xd25
created by net/http.(*Transport).dialConn in goroutine 1011
	/usr/local/go/src/net/http/transport.go:1776 +0x169f

goroutine 1474 [IO wait, 84 minutes]:
internal/poll.runtime_pollWait(0x4c484ce8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc000f944e0?, 0xc00014be00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000f944e0, {0xc00014be00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0010120d8, {0xc00014be00?, 0xc0009e20d0?, 0xc000113668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013a6510, {0x39259c0, 0xc0010120d8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3925a40, 0xc0013a6510}, {0x39259c0, 0xc0010120d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0x34d5f50?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1456
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 1473 [IO wait, 84 minutes]:
internal/poll.runtime_pollWait(0x4cccf8d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc000f94360?, 0xc00014b000?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000f94360, {0xc00014b000, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001012090, {0xc00014b000?, 0xc000113e68?, 0xc000113e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013a64e0, {0x39259c0, 0xc001012090})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3925a40, 0xc0013a64e0}, {0x39259c0, 0xc001012090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc00141c420?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1456
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a
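
Goroutines 1474 and 1473 above are the stdout/stderr pumps that os/exec spawns whenever a command's output goes to a non-*os.File writer such as a bytes.Buffer: each blocks reading its pipe until the write side closes, and a leaked grandchild process that inherited the pipe can hold them open for hours (84 minutes here). A minimal sketch of where these goroutines come from:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        var stdout, stderr bytes.Buffer
        cmd := exec.Command("echo", "hello")
        cmd.Stdout = &stdout // non-*os.File writer => os/exec adds a pipe
        cmd.Stderr = &stderr //   and an io.Copy goroutine per stream

        if err := cmd.Run(); err != nil {
            fmt.Println("run failed:", err)
            return
        }
        fmt.Print(stdout.String())
    }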

Test pass (132/153)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.29
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.43
10 TestDownloadOnly/v1.28.2/json-events 10.11
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.66
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
18 TestDownloadOnlyKic 2.01
19 TestBinaryMirror 1.61
22 TestAddons/Setup 149.85
26 TestAddons/parallel/InspektorGadget 10.82
27 TestAddons/parallel/MetricsServer 5.83
28 TestAddons/parallel/HelmTiller 11.09
30 TestAddons/parallel/CSI 43.26
31 TestAddons/parallel/Headlamp 13.5
32 TestAddons/parallel/CloudSpanner 5.99
33 TestAddons/parallel/LocalPath 52.8
36 TestAddons/serial/GCPAuth/Namespaces 0.11
37 TestAddons/StoppedEnableDisable 11.78
48 TestErrorSpam/setup 22.54
49 TestErrorSpam/start 2.01
50 TestErrorSpam/status 1.17
51 TestErrorSpam/pause 1.65
52 TestErrorSpam/unpause 1.78
53 TestErrorSpam/stop 11.42
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 37.45
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 35.88
60 TestFunctional/serial/KubeContext 0.04
61 TestFunctional/serial/KubectlGetPods 0.08
64 TestFunctional/serial/CacheCmd/cache/add_remote 4.93
65 TestFunctional/serial/CacheCmd/cache/add_local 1.69
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
67 TestFunctional/serial/CacheCmd/cache/list 0.06
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.4
69 TestFunctional/serial/CacheCmd/cache/cache_reload 2.3
70 TestFunctional/serial/CacheCmd/cache/delete 0.13
71 TestFunctional/serial/MinikubeKubectlCmd 0.53
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.73
73 TestFunctional/serial/ExtraConfig 40.26
74 TestFunctional/serial/ComponentHealth 0.06
75 TestFunctional/serial/LogsCmd 3.17
76 TestFunctional/serial/LogsFileCmd 3.14
77 TestFunctional/serial/InvalidService 4.09
79 TestFunctional/parallel/ConfigCmd 0.41
80 TestFunctional/parallel/DashboardCmd 13.21
81 TestFunctional/parallel/DryRun 1.35
82 TestFunctional/parallel/InternationalLanguage 0.61
83 TestFunctional/parallel/StatusCmd 1.17
88 TestFunctional/parallel/AddonsCmd 0.23
89 TestFunctional/parallel/PersistentVolumeClaim 29.18
91 TestFunctional/parallel/SSHCmd 0.74
92 TestFunctional/parallel/CpCmd 1.56
93 TestFunctional/parallel/MySQL 32.58
94 TestFunctional/parallel/FileSync 0.38
95 TestFunctional/parallel/CertSync 2.41
99 TestFunctional/parallel/NodeLabels 0.05
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.37
103 TestFunctional/parallel/License 0.28
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.19
109 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
110 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
114 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
115 TestFunctional/parallel/ServiceCmd/DeployApp 7.13
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
117 TestFunctional/parallel/ProfileCmd/profile_list 0.52
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
119 TestFunctional/parallel/MountCmd/any-port 7.5
120 TestFunctional/parallel/ServiceCmd/List 0.67
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
122 TestFunctional/parallel/ServiceCmd/HTTPS 15
123 TestFunctional/parallel/MountCmd/specific-port 2.18
124 TestFunctional/parallel/MountCmd/VerifyCleanup 2.54
125 TestFunctional/parallel/ServiceCmd/Format 15
126 TestFunctional/parallel/Version/short 0.09
127 TestFunctional/parallel/Version/components 0.67
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.37
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
132 TestFunctional/parallel/ImageCommands/ImageBuild 3.62
133 TestFunctional/parallel/ImageCommands/Setup 2.39
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.06
135 TestFunctional/parallel/ServiceCmd/URL 15
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.32
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.44
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.24
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.94
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.35
142 TestFunctional/parallel/DockerEnv/bash 1.56
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.27
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
146 TestFunctional/delete_addon-resizer_images 0.14
147 TestFunctional/delete_my-image_image 0.05
148 TestFunctional/delete_minikube_cached_images 0.05
152 TestImageBuild/serial/Setup 22.31
153 TestImageBuild/serial/NormalBuild 1.69
154 TestImageBuild/serial/BuildWithBuildArg 0.98
155 TestImageBuild/serial/BuildWithDockerIgnore 0.77
156 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.75
166 TestJSONOutput/start/Command 35.16
167 TestJSONOutput/start/Audit 0
169 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Command 0.62
173 TestJSONOutput/pause/Audit 0
175 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Command 0.6
179 TestJSONOutput/unpause/Audit 0
181 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/stop/Command 5.87
185 TestJSONOutput/stop/Audit 0
187 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
189 TestErrorJSONOutput 0.76
191 TestKicCustomNetwork/create_custom_network 24.04
192 TestKicCustomNetwork/use_default_bridge_network 24.03
193 TestKicExistingNetwork 24.1
194 TestKicCustomSubnet 24.34
195 TestKicStaticIP 24.51
196 TestMainNoArgs 0.07
197 TestMinikubeProfile 50.79
TestDownloadOnly/v1.16.0/json-events (17.29s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-991000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-991000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (17.285486644s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.29s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.43s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-991000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-991000: exit status 85 (428.691107ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-991000 | jenkins | v1.31.2 | 03 Oct 23 18:06 PDT |          |
	|         | -p download-only-991000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 18:06:06
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:06:06.481520   22320 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:06:06.481842   22320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:06:06.481847   22320 out.go:309] Setting ErrFile to fd 2...
	I1003 18:06:06.481851   22320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:06:06.482017   22320 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
	W1003 18:06:06.482125   22320 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17348-21848/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17348-21848/.minikube/config/config.json: no such file or directory
	I1003 18:06:06.484071   22320 out.go:303] Setting JSON to true
	I1003 18:06:06.506838   22320 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5735,"bootTime":1696375831,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 18:06:06.506966   22320 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 18:06:06.531075   22320 out.go:97] [download-only-991000] minikube v1.31.2 on Darwin 14.0
	I1003 18:06:06.553161   22320 out.go:169] MINIKUBE_LOCATION=17348
	I1003 18:06:06.531325   22320 notify.go:220] Checking for updates...
	W1003 18:06:06.531333   22320 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball: no such file or directory
	I1003 18:06:06.597264   22320 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
	I1003 18:06:06.639227   22320 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 18:06:06.681180   22320 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:06:06.725001   22320 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube
	W1003 18:06:06.768365   22320 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 18:06:06.768897   22320 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 18:06:06.826593   22320 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 18:06:06.826707   22320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:06:06.928016   22320 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-04 01:06:06.91755234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfine
d name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages
Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sco
ut Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:06:06.949649   22320 out.go:97] Using the docker driver based on user configuration
	I1003 18:06:06.949668   22320 start.go:298] selected driver: docker
	I1003 18:06:06.949677   22320 start.go:902] validating driver "docker" against <nil>
	I1003 18:06:06.949780   22320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:06:07.052904   22320 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-04 01:06:07.040682256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:06:07.053071   22320 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 18:06:07.056431   22320 start_flags.go:384] Using suggested 5891MB memory alloc based on sys=32768MB, container=5939MB
	I1003 18:06:07.056585   22320 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 18:06:07.077817   22320 out.go:169] Using Docker Desktop driver with root privileges
	I1003 18:06:07.099719   22320 cni.go:84] Creating CNI manager for ""
	I1003 18:06:07.099752   22320 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 18:06:07.099768   22320 start_flags.go:321] config:
	{Name:download-only-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:06:07.121631   22320 out.go:97] Starting control plane node download-only-991000 in cluster download-only-991000
	I1003 18:06:07.121672   22320 cache.go:122] Beginning downloading kic base image for docker with docker
	I1003 18:06:07.142717   22320 out.go:97] Pulling base image ...
	I1003 18:06:07.142777   22320 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 18:06:07.142884   22320 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 in local docker daemon
	I1003 18:06:07.195001   22320 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 to local cache
	I1003 18:06:07.195290   22320 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 in local cache directory
	I1003 18:06:07.195412   22320 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 to local cache
	I1003 18:06:07.197433   22320 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1003 18:06:07.197447   22320 cache.go:57] Caching tarball of preloaded images
	I1003 18:06:07.197602   22320 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 18:06:07.218852   22320 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1003 18:06:07.218894   22320 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:06:07.302711   22320 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1003 18:06:13.892572   22320 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:06:13.892760   22320 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:06:14.443619   22320 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1003 18:06:14.443835   22320 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/download-only-991000/config.json ...
	I1003 18:06:14.443858   22320 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/download-only-991000/config.json: {Name:mkeca16f3afce621bbc18433c9ead4d3d5a23b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:06:14.444126   22320 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 18:06:14.444375   22320 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-991000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.43s)
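
The assertion above hinges on "minikube logs" exiting with status 85 for a download-only profile, since no control plane was ever created to read logs from. A hedged sketch of checking for one specific exit code the way such a test might (the helper shape is illustrative; the binary path and the code 85 come from the log):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-991000")
        err := cmd.Run()

        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
            fmt.Println("got expected exit status 85")
            return
        }
        fmt.Println("unexpected result:", err)
    }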

TestDownloadOnly/v1.28.2/json-events (10.11s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-991000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-991000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker : (10.106796192s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (10.11s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-991000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-991000: exit status 85 (286.683728ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-991000 | jenkins | v1.31.2 | 03 Oct 23 18:06 PDT |          |
	|         | -p download-only-991000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-991000 | jenkins | v1.31.2 | 03 Oct 23 18:06 PDT |          |
	|         | -p download-only-991000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 18:06:24
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:06:24.201001   22356 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:06:24.201608   22356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:06:24.201619   22356 out.go:309] Setting ErrFile to fd 2...
	I1003 18:06:24.201626   22356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:06:24.202217   22356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
	W1003 18:06:24.202332   22356 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17348-21848/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17348-21848/.minikube/config/config.json: no such file or directory
	I1003 18:06:24.203673   22356 out.go:303] Setting JSON to true
	I1003 18:06:24.225842   22356 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5753,"bootTime":1696375831,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 18:06:24.225961   22356 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 18:06:24.247702   22356 out.go:97] [download-only-991000] minikube v1.31.2 on Darwin 14.0
	I1003 18:06:24.270721   22356 out.go:169] MINIKUBE_LOCATION=17348
	I1003 18:06:24.247877   22356 notify.go:220] Checking for updates...
	I1003 18:06:24.313563   22356 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
	I1003 18:06:24.355627   22356 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 18:06:24.397539   22356 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:06:24.439488   22356 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube
	W1003 18:06:24.481611   22356 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 18:06:24.482418   22356 config.go:182] Loaded profile config "download-only-991000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1003 18:06:24.482507   22356 start.go:810] api.Load failed for download-only-991000: filestore "download-only-991000": Docker machine "download-only-991000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1003 18:06:24.482716   22356 driver.go:373] Setting default libvirt URI to qemu:///system
	W1003 18:06:24.482760   22356 start.go:810] api.Load failed for download-only-991000: filestore "download-only-991000": Docker machine "download-only-991000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1003 18:06:24.542186   22356 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 18:06:24.542344   22356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:06:24.647267   22356 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-04 01:06:24.635587987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:06:24.669413   22356 out.go:97] Using the docker driver based on existing profile
	I1003 18:06:24.669435   22356 start.go:298] selected driver: docker
	I1003 18:06:24.669443   22356 start.go:902] validating driver "docker" against &{Name:download-only-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-991000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:06:24.669688   22356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:06:24.772989   22356 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-04 01:06:24.761952476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:06:24.776247   22356 cni.go:84] Creating CNI manager for ""
	I1003 18:06:24.776271   22356 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 18:06:24.776284   22356 start_flags.go:321] config:
	{Name:download-only-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:06:24.797589   22356 out.go:97] Starting control plane node download-only-991000 in cluster download-only-991000
	I1003 18:06:24.797627   22356 cache.go:122] Beginning downloading kic base image for docker with docker
	I1003 18:06:24.819379   22356 out.go:97] Pulling base image ...
	I1003 18:06:24.819469   22356 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 18:06:24.819518   22356 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 in local docker daemon
	I1003 18:06:24.868969   22356 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1003 18:06:24.868992   22356 cache.go:57] Caching tarball of preloaded images
	I1003 18:06:24.869217   22356 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 18:06:24.890439   22356 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1003 18:06:24.890457   22356 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:06:24.893698   22356 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 to local cache
	I1003 18:06:24.893853   22356 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 in local cache directory
	I1003 18:06:24.893873   22356 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 in local cache directory, skipping pull
	I1003 18:06:24.893878   22356 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 exists in cache, skipping pull
	I1003 18:06:24.893907   22356 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 as a tarball
	I1003 18:06:24.974001   22356 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4?checksum=md5:30a5cb95ef165c1e9196502a3ab2be2b -> /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1003 18:06:31.895306   22356 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:06:31.895488   22356 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1003 18:06:32.521066   22356 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 18:06:32.521158   22356 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/download-only-991000/config.json ...
	I1003 18:06:32.521524   22356 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 18:06:32.521735   22356 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/darwin/amd64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-991000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.29s)
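
The preload downloads above carry a ?checksum=md5:... parameter, and the log then shows an explicit save/verify cycle for the tarball. A hedged, stdlib-only sketch of that verification step (the file path is a placeholder and the digest literal is copied from the log; minikube's own downloader performs this internally):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    func main() {
        const expected = "30a5cb95ef165c1e9196502a3ab2be2b" // digest from the URL above

        f, err := os.Open("preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4")
        if err != nil {
            fmt.Println("open:", err)
            return
        }
        defer f.Close()

        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            fmt.Println("hash:", err)
            return
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != expected {
            fmt.Println("checksum mismatch:", got)
            return
        }
        fmt.Println("checksum OK")
    }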

TestDownloadOnly/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.66s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-991000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (2.01s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-423000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-423000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-423000
--- PASS: TestDownloadOnlyKic (2.01s)

TestBinaryMirror (1.61s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-040000 --alsologtostderr --binary-mirror http://127.0.0.1:55284 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-040000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-040000
--- PASS: TestBinaryMirror (1.61s)

TestAddons/Setup (149.85s)

=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-431000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:89: (dbg) Done: out/minikube-darwin-amd64 start -p addons-431000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m29.847419947s)
--- PASS: TestAddons/Setup (149.85s)
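
For reference, this is the one-shot setup the rest of the addon tests depend on: a single start invocation that creates the cluster and enables every addon under test. A rough equivalent by hand, assuming a built binary at out/minikube-darwin-amd64 and Docker Desktop running (addons-431000 is just this run's profile name):

  # Create the cluster with addons enabled at start time (flag list abridged from the run above)
  out/minikube-darwin-amd64 start -p addons-431000 --wait=true --memory=4000 --driver=docker \
    --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns
  # Addons can also be toggled on an existing profile
  out/minikube-darwin-amd64 addons enable dashboard -p addons-431000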

TestAddons/parallel/InspektorGadget (10.82s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kshh8" [63f8d760-0d21-4a45-9fd3-a8d52ffb6b79] Running
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011893169s
addons_test.go:819: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-431000
addons_test.go:819: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-431000: (5.807308249s)
--- PASS: TestAddons/parallel/InspektorGadget (10.82s)

TestAddons/parallel/MetricsServer (5.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 5.629631ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-kd2rm" [24684d8a-cc60-421c-a380-e4e79c9e4f25] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014869953s
addons_test.go:393: (dbg) Run:  kubectl --context addons-431000 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-darwin-amd64 -p addons-431000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)
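
Once the metrics-server addon reports healthy, resource metrics flow through the standard kubectl top path, which is all this test checks before turning the addon off again. Mirroring the run above:

  # Query pod metrics via the metrics-server addon, then disable it
  kubectl --context addons-431000 top pods -n kube-system
  out/minikube-darwin-amd64 -p addons-431000 addons disable metrics-server --alsologtostderr -v=1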

TestAddons/parallel/HelmTiller (11.09s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:434: tiller-deploy stabilized in 4.048862ms
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-l8j7r" [438eecc1-4b9c-4be1-b81d-5cfa424a58c7] Running
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013770725s
addons_test.go:451: (dbg) Run:  kubectl --context addons-431000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:451: (dbg) Done: kubectl --context addons-431000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.183782996s)
addons_test.go:468: (dbg) Run:  out/minikube-darwin-amd64 -p addons-431000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.09s)
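
The tiller check is a one-off client pod that runs helm version against the in-cluster tiller, after which the addon is disabled. Condensed from the commands above:

  # Run a disposable helm 2.x client pod against tiller, then disable the addon
  kubectl --context addons-431000 run --rm helm-test --restart=Never \
    --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
  out/minikube-darwin-amd64 -p addons-431000 addons disable helm-tiller --alsologtostderr -v=1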

TestAddons/parallel/CSI (43.26s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 34.42689ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-431000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-431000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e0a4525b-4c12-4ac3-ae4b-33ac77161c1e] Pending
helpers_test.go:344: "task-pv-pod" [e0a4525b-4c12-4ac3-ae4b-33ac77161c1e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e0a4525b-4c12-4ac3-ae4b-33ac77161c1e] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.012887623s
addons_test.go:562: (dbg) Run:  kubectl --context addons-431000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-431000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-431000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-431000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-431000 delete pod task-pv-pod
addons_test.go:572: (dbg) Done: kubectl --context addons-431000 delete pod task-pv-pod: (1.11579894s)
addons_test.go:578: (dbg) Run:  kubectl --context addons-431000 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-431000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-431000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [77e04baf-5914-4a81-a2de-e38e949d863a] Pending
helpers_test.go:344: "task-pv-pod-restore" [77e04baf-5914-4a81-a2de-e38e949d863a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [77e04baf-5914-4a81-a2de-e38e949d863a] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.014738738s
addons_test.go:604: (dbg) Run:  kubectl --context addons-431000 delete pod task-pv-pod-restore
addons_test.go:608: (dbg) Run:  kubectl --context addons-431000 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-431000 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-darwin-amd64 -p addons-431000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-darwin-amd64 -p addons-431000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.766134418s)
addons_test.go:620: (dbg) Run:  out/minikube-darwin-amd64 -p addons-431000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.26s)
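
The passing run above covers the whole provision → snapshot → restore loop. Stripped of the polling, the same steps look roughly like this (the manifests are minikube's own testdata/csi-hostpath-driver files, and the volumesnapshots and csi-hostpath-driver addons must be enabled):

  kubectl --context addons-431000 create -f testdata/csi-hostpath-driver/pvc.yaml         # claim storage (hpvc)
  kubectl --context addons-431000 create -f testdata/csi-hostpath-driver/pv-pod.yaml      # pod writes into the volume
  kubectl --context addons-431000 create -f testdata/csi-hostpath-driver/snapshot.yaml    # snapshot the claim
  kubectl --context addons-431000 delete pod task-pv-pod
  kubectl --context addons-431000 delete pvc hpvc
  kubectl --context addons-431000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # new claim from the snapshot
  kubectl --context addons-431000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod sees the restored data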

TestAddons/parallel/Headlamp (13.5s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-431000 --alsologtostderr -v=1
addons_test.go:802: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-431000 --alsologtostderr -v=1: (1.486324069s)
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-2k2sn" [baafb57e-f90f-4eca-81fb-d8b82eff3f30] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-2k2sn" [baafb57e-f90f-4eca-81fb-d8b82eff3f30] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.012362278s
--- PASS: TestAddons/parallel/Headlamp (13.50s)

TestAddons/parallel/CloudSpanner (5.99s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-5nwj6" [619f7720-8b96-450c-878e-7bf22e23aa58] Running
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011509164s
addons_test.go:838: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-431000
--- PASS: TestAddons/parallel/CloudSpanner (5.99s)

TestAddons/parallel/LocalPath (52.8s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-431000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-431000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d46220f3-a2a0-4933-a698-57a55f276bcb] Pending
helpers_test.go:344: "test-local-path" [d46220f3-a2a0-4933-a698-57a55f276bcb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d46220f3-a2a0-4933-a698-57a55f276bcb] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d46220f3-a2a0-4933-a698-57a55f276bcb] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.01099729s
addons_test.go:869: (dbg) Run:  kubectl --context addons-431000 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-darwin-amd64 -p addons-431000 ssh "cat /opt/local-path-provisioner/pvc-fbeb7af8-abe6-4b65-8ad4-732505f7aaca_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-431000 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-431000 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-darwin-amd64 -p addons-431000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:898: (dbg) Done: out/minikube-darwin-amd64 -p addons-431000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.415947751s)
--- PASS: TestAddons/parallel/LocalPath (52.80s)
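
In outline, the local-path check binds a claim, lets a short-lived pod write through it, then reads the file back from the provisioner's directory on the node. A sketch; the pvc-... directory name is generated per claim, so the UUID below is a placeholder:

  kubectl --context addons-431000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-431000 apply -f testdata/storage-provisioner-rancher/pod.yaml
  # Once the pod completes, the written file is visible under the provisioner root on the node
  out/minikube-darwin-amd64 -p addons-431000 ssh "cat /opt/local-path-provisioner/pvc-<uuid>_default_test-pvc/file1"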

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-431000 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-431000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.78s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-431000
addons_test.go:150: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-431000: (11.099852808s)
addons_test.go:154: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-431000
addons_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-431000
addons_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-431000
--- PASS: TestAddons/StoppedEnableDisable (11.78s)
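
This test's point is that addon enable/disable still works while the cluster is stopped. By hand:

  out/minikube-darwin-amd64 stop -p addons-431000
  # Both of these succeed against the stopped cluster
  out/minikube-darwin-amd64 addons enable dashboard -p addons-431000
  out/minikube-darwin-amd64 addons disable dashboard -p addons-431000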

TestErrorSpam/setup (22.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-744000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-744000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 --driver=docker : (22.541390982s)
--- PASS: TestErrorSpam/setup (22.54s)

TestErrorSpam/start (2.01s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 start --dry-run
--- PASS: TestErrorSpam/start (2.01s)

TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 pause
--- PASS: TestErrorSpam/pause (1.65s)

TestErrorSpam/unpause (1.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (11.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 stop: (10.812271408s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-744000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-744000 stop
--- PASS: TestErrorSpam/stop (11.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/test/nested/copy/22318/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.45s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-323000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-323000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (37.445984386s)
--- PASS: TestFunctional/serial/StartWithProxy (37.45s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.88s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-323000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-323000 --alsologtostderr -v=8: (35.884341748s)
functional_test.go:659: soft start took 35.884761849s for "functional-323000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.88s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-323000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 cache add registry.k8s.io/pause:3.1: (1.748150081s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 cache add registry.k8s.io/pause:3.3: (1.687155818s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 cache add registry.k8s.io/pause:latest: (1.497616159s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.93s)

TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local589155451/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 cache add minikube-local-cache-test:functional-323000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 cache add minikube-local-cache-test:functional-323000: (1.062396981s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 cache delete minikube-local-cache-test:functional-323000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-323000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.69s)
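
The local variant builds a throwaway image on the host and pushes it into the node's cache. The build context in the run above is a temp directory the test generates, so any directory with a suitable Dockerfile stands in for it here:

  docker build -t minikube-local-cache-test:functional-323000 <dir-with-Dockerfile>
  out/minikube-darwin-amd64 -p functional-323000 cache add minikube-local-cache-test:functional-323000
  # Clean up the cache entry and the host-side image
  out/minikube-darwin-amd64 -p functional-323000 cache delete minikube-local-cache-test:functional-323000
  docker rmi minikube-local-cache-test:functional-323000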

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.40s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (381.4631ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 cache reload: (1.131784345s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.30s)
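
The shape of this flow: removing a cached image inside the node makes crictl inspecti fail with exit status 1, and cache reload pushes everything in the local cache back into the node. Condensed:

  out/minikube-darwin-amd64 -p functional-323000 ssh sudo docker rmi registry.k8s.io/pause:latest
  out/minikube-darwin-amd64 -p functional-323000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
  out/minikube-darwin-amd64 -p functional-323000 cache reload
  out/minikube-darwin-amd64 -p functional-323000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again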

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.53s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 kubectl -- --context functional-323000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.73s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-323000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.73s)

TestFunctional/serial/ExtraConfig (40.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-323000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1003 18:14:09.438508   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:14:09.444301   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:14:09.455320   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:14:09.477595   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:14:09.517773   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:14:09.597943   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:14:09.758061   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:14:10.078417   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:14:10.719209   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-323000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.257079383s)
functional_test.go:757: restart took 40.257252495s for "functional-323000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.26s)
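
--extra-config takes component.key=value pairs; re-running start against an existing profile applies the flag and restarts the cluster, which is why the log reports this as a restart. The invocation, minus the CI wrapper:

  # Pass an extra apiserver flag through to the control plane and wait for all components
  out/minikube-darwin-amd64 start -p functional-323000 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all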

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-323000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 logs
E1003 18:14:11.999597   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:14:14.559823   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 logs: (3.165098819s)
--- PASS: TestFunctional/serial/LogsCmd (3.17s)

TestFunctional/serial/LogsFileCmd (3.14s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2071755615/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2071755615/001/logs.txt: (3.136243755s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.14s)

TestFunctional/serial/InvalidService (4.09s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-323000 apply -f testdata/invalidsvc.yaml
E1003 18:14:19.682030   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-323000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-323000: exit status 115 (544.916656ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30215 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-323000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.09s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 config get cpus: exit status 14 (44.643482ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 config get cpus: exit status 14 (44.202132ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
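
The exit codes are the point here: config get on an unset key exits 14 with "specified key could not be found in config", while a set/get round-trip succeeds. In short:

  out/minikube-darwin-amd64 -p functional-323000 config get cpus     # exit 14 while unset
  out/minikube-darwin-amd64 -p functional-323000 config set cpus 2
  out/minikube-darwin-amd64 -p functional-323000 config get cpus     # now prints the stored value
  out/minikube-darwin-amd64 -p functional-323000 config unset cpus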

TestFunctional/parallel/DashboardCmd (13.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-323000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-323000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24566: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.21s)

TestFunctional/parallel/DryRun (1.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-323000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-323000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (706.805841ms)

-- stdout --
	* [functional-323000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1003 18:15:08.625454   24509 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:15:08.625742   24509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:15:08.625747   24509 out.go:309] Setting ErrFile to fd 2...
	I1003 18:15:08.625751   24509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:15:08.625939   24509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
	I1003 18:15:08.627263   24509 out.go:303] Setting JSON to false
	I1003 18:15:08.649165   24509 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6277,"bootTime":1696375831,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 18:15:08.649256   24509 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 18:15:08.670701   24509 out.go:177] * [functional-323000] minikube v1.31.2 on Darwin 14.0
	I1003 18:15:08.712651   24509 out.go:177]   - MINIKUBE_LOCATION=17348
	I1003 18:15:08.712747   24509 notify.go:220] Checking for updates...
	I1003 18:15:08.755445   24509 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
	I1003 18:15:08.813738   24509 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 18:15:08.857744   24509 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:15:08.878606   24509 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube
	I1003 18:15:08.952990   24509 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:15:08.975392   24509 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 18:15:08.976104   24509 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 18:15:09.033887   24509 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 18:15:09.034019   24509 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:15:09.137988   24509 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:71 SystemTime:2023-10-04 01:15:09.127815209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:15:09.159852   24509 out.go:177] * Using the docker driver based on existing profile
	I1003 18:15:09.180609   24509 start.go:298] selected driver: docker
	I1003 18:15:09.180631   24509 start.go:902] validating driver "docker" against &{Name:functional-323000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-323000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:15:09.180731   24509 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:15:09.206491   24509 out.go:177] 
	W1003 18:15:09.227580   24509 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 18:15:09.250389   24509 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-323000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.35s)
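
--dry-run validates the requested configuration without touching the cluster, so the undersized request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) while a dry run with acceptable flags passes. Condensed:

  out/minikube-darwin-amd64 start -p functional-323000 --dry-run --memory 250MB --driver=docker    # exit 23: below the 1800MB floor
  out/minikube-darwin-amd64 start -p functional-323000 --dry-run --alsologtostderr -v=1 --driver=docker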

TestFunctional/parallel/InternationalLanguage (0.61s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-323000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-323000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (611.596538ms)

-- stdout --
	* [functional-323000] minikube v1.31.2 sur Darwin 14.0
	  - MINIKUBE_LOCATION=17348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1003 18:15:08.007397   24493 out.go:296] Setting OutFile to fd 1 ...
	I1003 18:15:08.007677   24493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:15:08.007683   24493 out.go:309] Setting ErrFile to fd 2...
	I1003 18:15:08.007687   24493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 18:15:08.007878   24493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
	I1003 18:15:08.009521   24493 out.go:303] Setting JSON to false
	I1003 18:15:08.031920   24493 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6277,"bootTime":1696375831,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1003 18:15:08.032028   24493 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 18:15:08.056672   24493 out.go:177] * [functional-323000] minikube v1.31.2 sur Darwin 14.0
	I1003 18:15:08.099551   24493 out.go:177]   - MINIKUBE_LOCATION=17348
	I1003 18:15:08.099605   24493 notify.go:220] Checking for updates...
	I1003 18:15:08.142440   24493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
	I1003 18:15:08.164438   24493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1003 18:15:08.185524   24493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:15:08.207494   24493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube
	I1003 18:15:08.229641   24493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:15:08.252158   24493 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 18:15:08.252882   24493 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 18:15:08.310533   24493 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1003 18:15:08.310663   24493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:15:08.409893   24493 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:71 SystemTime:2023-10-04 01:15:08.398346832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1003 18:15:08.452923   24493 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1003 18:15:08.474868   24493 start.go:298] selected driver: docker
	I1003 18:15:08.474898   24493 start.go:902] validating driver "docker" against &{Name:functional-323000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-323000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 18:15:08.475049   24493 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:15:08.500505   24493 out.go:177] 
	W1003 18:15:08.521679   24493 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1003 18:15:08.542744   24493 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.61s)
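
Note: this test passes because the dry-run start fails with the localized French form of the same RSRC_INSUFFICIENT_REQ_MEMORY error seen in the DryRun block. For illustration only, a minimal Go sketch of reproducing that check outside the harness; the assumption that the language is selected via the LC_ALL/LANG environment variables is ours, since the log does not show how the harness switches locales.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as functional_test.go:1016 above.
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-323000",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=docker")
	// Assumption: French output is selected through the locale environment.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("expected a non-zero exit for the 250MB request, got success")
		os.Exit(1)
	}
	// The error code stays machine-readable even when the message text is localized.
	if !strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("localized insufficient-memory error not found")
		os.Exit(1)
	}
	fmt.Println("got the expected localized RSRC_INSUFFICIENT_REQ_MEMORY error")
}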

TestFunctional/parallel/StatusCmd (1.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (29.18s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9233d6f6-4fa9-4ccb-a5f4-08fb6d58995d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.018041539s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-323000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-323000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-323000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-323000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fc3f5ace-ad34-40bf-936c-163a6f4ccc93] Pending
helpers_test.go:344: "sp-pod" [fc3f5ace-ad34-40bf-936c-163a6f4ccc93] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fc3f5ace-ad34-40bf-936c-163a6f4ccc93] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.012840475s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-323000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-323000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-323000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [598c610d-c4cb-4493-8e45-6305558bdd03] Pending
helpers_test.go:344: "sp-pod" [598c610d-c4cb-4493-8e45-6305558bdd03] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [598c610d-c4cb-4493-8e45-6305558bdd03] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.012206987s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-323000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.18s)
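
The PVC block above leans on the harness's wait-for-label helper (the helpers_test.go:344 lines). For illustration, a minimal Go sketch of the same poll-until-Running pattern with kubectl and the context name from this run; the jsonpath query and polling interval are our assumptions, not minikube's actual helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll pod phases for the label used above until one reports Running.
	deadline := time.Now().Add(3 * time.Minute) // same budget as the log's "waiting 3m0s"
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-323000",
			"get", "pods", "-l", "test=storage-provisioner", "-n", "default",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("test=storage-provisioner pod is Running")
			return
		}
		time.Sleep(2 * time.Second) // assumed interval; the helper's is not shown
	}
	fmt.Println("timed out waiting for test=storage-provisioner")
}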

TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (1.56s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh -n functional-323000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 cp functional-323000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd735609644/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh -n functional-323000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.56s)

TestFunctional/parallel/MySQL (32.58s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-323000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-j9bjq" [2089dec4-3e3f-4708-8315-4d7b2771c3c7] Pending
helpers_test.go:344: "mysql-859648c796-j9bjq" [2089dec4-3e3f-4708-8315-4d7b2771c3c7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-j9bjq" [2089dec4-3e3f-4708-8315-4d7b2771c3c7] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.016485641s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-323000 exec mysql-859648c796-j9bjq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-323000 exec mysql-859648c796-j9bjq -- mysql -ppassword -e "show databases;": exit status 1 (138.909584ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-323000 exec mysql-859648c796-j9bjq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-323000 exec mysql-859648c796-j9bjq -- mysql -ppassword -e "show databases;": exit status 1 (121.252697ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-323000 exec mysql-859648c796-j9bjq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-323000 exec mysql-859648c796-j9bjq -- mysql -ppassword -e "show databases;": exit status 1 (124.015085ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-323000 exec mysql-859648c796-j9bjq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.58s)
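
The ERROR 1045 and ERROR 2002 failures above are the normal mysqld startup window; the test simply re-runs the query until it succeeds. A rough Go sketch of that retry loop, using the pod name from this run (the attempt count and backoff are illustrative, not the harness's actual values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Re-run the query until mysqld accepts the root password.
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-323000",
			"exec", "mysql-859648c796-j9bjq", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("query succeeded on attempt %d:\n%s", attempt, out)
			return
		}
		time.Sleep(3 * time.Second) // back off while mysqld initializes
	}
	fmt.Println("mysql never became ready")
}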

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/22318/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "sudo cat /etc/test/nested/copy/22318/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/22318.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "sudo cat /etc/ssl/certs/22318.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/22318.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "sudo cat /usr/share/ca-certificates/22318.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/223182.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "sudo cat /etc/ssl/certs/223182.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/223182.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "sudo cat /usr/share/ca-certificates/223182.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.41s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-323000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 ssh "sudo systemctl is-active crio": exit status 1 (373.637625ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)
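
This test passes because `systemctl is-active crio` prints "inactive" and exits non-zero (status 3 on the node, surfaced as exit status 1 by `minikube ssh`). A small Go sketch of that check, under the same reading that a non-zero exit plus "inactive" on stdout means the runtime is disabled:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-323000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	// Non-zero exit plus "inactive" on stdout means the unit is not running.
	if err != nil && state == "inactive" {
		fmt.Println("crio is disabled, as expected with the docker runtime active")
		return
	}
	fmt.Printf("unexpected state %q (err=%v)\n", state, err)
}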

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-323000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-323000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-323000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 24051: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-323000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-323000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-323000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fee2c758-8c2c-40c9-bd72-6b39235b4392] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fee2c758-8c2c-40c9-bd72-6b39235b4392] Running
E1003 18:14:29.922389   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.012516079s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-323000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-323000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 24088: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-323000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-323000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-p9pn7" [799d0968-af44-4f94-be27-5109f2f85760] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-p9pn7" [799d0968-af44-4f94-be27-5109f2f85760] Running
E1003 18:14:50.402719   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.015662601s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "450.369725ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "70.660331ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)
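
The profile tests record how long each invocation takes (the `Took "..."` lines above). For illustration, a Go sketch of timing a CLI call the same way; the command mirrors the log, while the measurement code itself is our assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	if err := exec.Command("out/minikube-darwin-amd64", "profile", "list").Run(); err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Matches the report's `Took "..." to run ...` phrasing.
	fmt.Printf("Took %q to run \"out/minikube-darwin-amd64 profile list\"\n", time.Since(start).String())
}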

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "386.667621ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "64.477595ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/MountCmd/any-port (7.5s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3395834259/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696382094591164000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3395834259/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696382094591164000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3395834259/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696382094591164000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3395834259/001/test-1696382094591164000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.943024ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  4 01:14 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  4 01:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  4 01:14 test-1696382094591164000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh cat /mount-9p/test-1696382094591164000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-323000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0fd049f9-691a-4e32-80fa-cfa8eed70611] Pending
helpers_test.go:344: "busybox-mount" [0fd049f9-691a-4e32-80fa-cfa8eed70611] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0fd049f9-691a-4e32-80fa-cfa8eed70611] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0fd049f9-691a-4e32-80fa-cfa8eed70611] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.013339659s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-323000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3395834259/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.50s)
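
The first `findmnt -T /mount-9p` probe failing and the immediate retry succeeding is the expected pattern here: the 9p mount appears shortly after the mount process starts. A minimal Go sketch of that probe-with-retry, with an assumed retry count and interval:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Probe for the 9p mount a few times; retry count and interval are assumptions.
	for i := 0; i < 5; i++ {
		err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-323000",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted over 9p")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared at /mount-9p")
}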

TestFunctional/parallel/ServiceCmd/List (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 service list -o json
functional_test.go:1493: Took "604.727775ms" to run "out/minikube-darwin-amd64 -p functional-323000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 service --namespace=default --https --url hello-node: signal: killed (15.002066702s)

-- stdout --
	https://127.0.0.1:56127

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:56127
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
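
The `signal: killed (15.002066702s)` outcome is expected: on the Docker driver, `service --url` keeps a tunnel open, so the harness reads the printed endpoint and then kills the process after its timeout. A Go sketch of that capture-then-kill pattern using a context deadline; the 15-second figure comes from this log, the URL parsing is illustrative:

package main

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	var stdout bytes.Buffer
	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "-p", "functional-323000",
		"service", "--namespace=default", "--https", "--url", "hello-node")
	cmd.Stdout = &stdout
	_ = cmd.Run() // expected to be killed by the context deadline
	for _, line := range strings.Split(stdout.String(), "\n") {
		if strings.HasPrefix(line, "https://") {
			fmt.Println("found endpoint:", line)
			return
		}
	}
	fmt.Println("no endpoint printed before timeout")
}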

TestFunctional/parallel/MountCmd/specific-port (2.18s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2259217428/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.626381ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2259217428/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 ssh "sudo umount -f /mount-9p": exit status 1 (346.869292ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-323000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2259217428/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.18s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1604157614/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1604157614/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1604157614/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 ssh "findmnt -T" /mount1: exit status 1 (446.07924ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-323000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1604157614/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1604157614/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-323000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1604157614/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 service hello-node --url --format={{.IP}}
2023/10/03 18:15:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 service hello-node --url --format={{.IP}}: signal: killed (15.002059552s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.67s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-323000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-323000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-323000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-323000 image ls --format short --alsologtostderr:
I1003 18:15:45.498672   24903 out.go:296] Setting OutFile to fd 1 ...
I1003 18:15:45.498981   24903 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:15:45.498986   24903 out.go:309] Setting ErrFile to fd 2...
I1003 18:15:45.498990   24903 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:15:45.499171   24903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
I1003 18:15:45.499777   24903 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 18:15:45.499867   24903 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 18:15:45.500310   24903 cli_runner.go:164] Run: docker container inspect functional-323000 --format={{.State.Status}}
I1003 18:15:45.551338   24903 ssh_runner.go:195] Run: systemctl --version
I1003 18:15:45.551413   24903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-323000
I1003 18:15:45.602579   24903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55878 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/functional-323000/id_rsa Username:docker}
I1003 18:15:45.692965   24903 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
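
As the stderr trace shows, the image list is produced by running `docker images --no-trunc --format "{{json .}}"` inside the node, one JSON object per line. A short Go sketch of decoding that stream; the struct fields follow the docker CLI's JSON format, and only a subset is mapped here:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// image maps a subset of the fields docker emits with --format "{{json .}}".
type image struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	ID         string `json:"ID"`
	Size       string `json:"Size"`
}

func main() {
	cmd := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		var img image
		if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
			continue // skip malformed lines
		}
		fmt.Printf("%s:%s (%s)\n", img.Repository, img.Tag, img.Size)
	}
	_ = cmd.Wait()
}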

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-323000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/localhost/my-image                | functional-323000 | 8a46877a7e623 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-323000 | 613d0514f6edd | 30B    |
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-323000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | d571254277f6a | 42.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-323000 image ls --format table --alsologtostderr:
I1003 18:15:50.063393   24943 out.go:296] Setting OutFile to fd 1 ...
I1003 18:15:50.063691   24943 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:15:50.063698   24943 out.go:309] Setting ErrFile to fd 2...
I1003 18:15:50.063702   24943 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:15:50.063893   24943 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
I1003 18:15:50.064514   24943 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 18:15:50.064625   24943 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 18:15:50.065045   24943 cli_runner.go:164] Run: docker container inspect functional-323000 --format={{.State.Status}}
I1003 18:15:50.118482   24943 ssh_runner.go:195] Run: systemctl --version
I1003 18:15:50.118618   24943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-323000
I1003 18:15:50.177579   24943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55878 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/functional-323000/id_rsa Username:docker}
I1003 18:15:50.267470   24943 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-323000 image ls --format json --alsologtostderr:
[{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"8a46877a7e623802bfc043b1275d503364c5290137ffe9e2502be04f712dc737","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-323000"],"size":"1240000"},{"id":"613d0514f6edd875199f5a6e3a32c87ceba7eccf1f1b1271593d141c420c1f83","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-323000"],"size":"30"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-323000"],"size":"32900000"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-323000 image ls --format json --alsologtostderr:
I1003 18:15:49.703923   24937 out.go:296] Setting OutFile to fd 1 ...
I1003 18:15:49.704473   24937 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:15:49.704486   24937 out.go:309] Setting ErrFile to fd 2...
I1003 18:15:49.704496   24937 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:15:49.704844   24937 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
I1003 18:15:49.705882   24937 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 18:15:49.706159   24937 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 18:15:49.706746   24937 cli_runner.go:164] Run: docker container inspect functional-323000 --format={{.State.Status}}
I1003 18:15:49.758653   24937 ssh_runner.go:195] Run: systemctl --version
I1003 18:15:49.758725   24937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-323000
I1003 18:15:49.819502   24937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55878 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/functional-323000/id_rsa Username:docker}
I1003 18:15:49.957185   24937 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)
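
For reference, the stdout above from `image ls --format json` is a single JSON array of image records with the fields id, repoDigests, repoTags, and size (bytes, emitted as a decimal string). Below is a minimal Go sketch that decodes that shape, assuming the JSON is piped in on stdin; the program and the file name imagels.go are hypothetical and not part of the test suite.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, as a decimal string
}

func main() {
	// Decode the whole array from stdin.
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	// Print each image's tags and raw size.
	for _, img := range images {
		fmt.Printf("%v %s\n", img.RepoTags, img.Size)
	}
}

Usage would be along the lines of: out/minikube-darwin-amd64 -p functional-323000 image ls --format json | go run imagels.go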

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-323000 image ls --format yaml --alsologtostderr:
- id: 613d0514f6edd875199f5a6e3a32c87ceba7eccf1f1b1271593d141c420c1f83
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-323000
size: "30"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-323000
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-323000 image ls --format yaml --alsologtostderr:
I1003 18:15:45.784825   24909 out.go:296] Setting OutFile to fd 1 ...
I1003 18:15:45.785120   24909 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:15:45.785125   24909 out.go:309] Setting ErrFile to fd 2...
I1003 18:15:45.785129   24909 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:15:45.785301   24909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
I1003 18:15:45.785916   24909 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 18:15:45.786006   24909 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 18:15:45.786447   24909 cli_runner.go:164] Run: docker container inspect functional-323000 --format={{.State.Status}}
I1003 18:15:45.838933   24909 ssh_runner.go:195] Run: systemctl --version
I1003 18:15:45.839004   24909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-323000
I1003 18:15:45.890677   24909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55878 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/functional-323000/id_rsa Username:docker}
I1003 18:15:45.977938   24909 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 ssh pgrep buildkitd: exit status 1 (342.602942ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image build -t localhost/my-image:functional-323000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 image build -t localhost/my-image:functional-323000 testdata/build --alsologtostderr: (2.981925917s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-323000 image build -t localhost/my-image:functional-323000 testdata/build --alsologtostderr:
I1003 18:15:46.411762   24925 out.go:296] Setting OutFile to fd 1 ...
I1003 18:15:46.412055   24925 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:15:46.412060   24925 out.go:309] Setting ErrFile to fd 2...
I1003 18:15:46.412064   24925 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:15:46.412244   24925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
I1003 18:15:46.412892   24925 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 18:15:46.413496   24925 config.go:182] Loaded profile config "functional-323000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 18:15:46.413901   24925 cli_runner.go:164] Run: docker container inspect functional-323000 --format={{.State.Status}}
I1003 18:15:46.466312   24925 ssh_runner.go:195] Run: systemctl --version
I1003 18:15:46.466385   24925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-323000
I1003 18:15:46.519595   24925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55878 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/functional-323000/id_rsa Username:docker}
I1003 18:15:46.610584   24925 build_images.go:151] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2444026597.tar
I1003 18:15:46.610677   24925 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1003 18:15:46.621997   24925 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2444026597.tar
I1003 18:15:46.627132   24925 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2444026597.tar: stat -c "%s %y" /var/lib/minikube/build/build.2444026597.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2444026597.tar': No such file or directory
I1003 18:15:46.627176   24925 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2444026597.tar --> /var/lib/minikube/build/build.2444026597.tar (3072 bytes)
I1003 18:15:46.653476   24925 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2444026597
I1003 18:15:46.664717   24925 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2444026597 -xf /var/lib/minikube/build/build.2444026597.tar
I1003 18:15:46.676129   24925 docker.go:340] Building image: /var/lib/minikube/build/build.2444026597
I1003 18:15:46.676209   24925 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-323000 /var/lib/minikube/build/build.2444026597
#0 building with "default" instance using docker driver

#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:8a46877a7e623802bfc043b1275d503364c5290137ffe9e2502be04f712dc737 done
#8 naming to localhost/my-image:functional-323000 done
#8 DONE 0.0s
I1003 18:15:49.293655   24925 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-323000 /var/lib/minikube/build/build.2444026597: (2.617409106s)
I1003 18:15:49.293715   24925 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2444026597
I1003 18:15:49.304734   24925 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2444026597.tar
I1003 18:15:49.315113   24925 build_images.go:207] Built localhost/my-image:functional-323000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2444026597.tar
I1003 18:15:49.315140   24925 build_images.go:123] succeeded building to: functional-323000
I1003 18:15:49.315146   24925 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.62s)

TestFunctional/parallel/ImageCommands/Setup (2.39s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.325973486s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-323000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.39s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr: (3.775715564s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.06s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-323000 service hello-node --url: signal: killed (15.001935423s)

-- stdout --
	http://127.0.0.1:56247

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:56247
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr
E1003 18:15:31.363986   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr: (2.038223846s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.32s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.986837981s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-323000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 image load --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr: (3.121794194s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.44s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image save gcr.io/google-containers/addon-resizer:functional-323000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 image save gcr.io/google-containers/addon-resizer:functional-323000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.236206523s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.24s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image rm gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.650278557s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.94s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-323000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 image save --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-323000 image save --daemon gcr.io/google-containers/addon-resizer:functional-323000 --alsologtostderr: (1.236572164s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-323000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.35s)

TestFunctional/parallel/DockerEnv/bash (1.56s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-323000 docker-env) && out/minikube-darwin-amd64 status -p functional-323000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-323000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.56s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-323000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

TestFunctional/delete_addon-resizer_images (0.14s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-323000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-323000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-323000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (22.31s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-709000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-709000 --driver=docker : (22.30907764s)
--- PASS: TestImageBuild/serial/Setup (22.31s)

TestImageBuild/serial/NormalBuild (1.69s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-709000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-709000: (1.69323498s)
--- PASS: TestImageBuild/serial/NormalBuild (1.69s)

TestImageBuild/serial/BuildWithBuildArg (0.98s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-709000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.98s)

TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-709000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-709000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

TestJSONOutput/start/Command (35.16s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-662000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E1003 18:24:23.952990   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:24:51.648964   22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-662000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (35.156560231s)
--- PASS: TestJSONOutput/start/Command (35.16s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-662000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-662000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-662000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-662000 --output=json --user=testUser: (5.867938476s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-435000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-435000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (391.353077ms)

-- stdout --
	{"specversion":"1.0","id":"ad6f1f26-8a09-4921-a9a6-c9dfd6964338","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-435000] minikube v1.31.2 on Darwin 14.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f472259e-60b8-4441-ad38-a68bfa16fa59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17348"}}
	{"specversion":"1.0","id":"53a5c86d-e2ea-4b0f-ada3-743b1e2eb846","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig"}}
	{"specversion":"1.0","id":"34d39e70-2bda-4e40-8aff-bd2bc8ac000b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"34fba617-6957-4375-8f32-33f383301eae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ae6286a4-8e55-4e36-842b-441f68b77fc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube"}}
	{"specversion":"1.0","id":"faa4fe60-3165-4e56-9bd7-4918de5e4057","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"113ff7b9-4607-42ab-9a36-0015b7ad5719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-435000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-435000
--- PASS: TestErrorJSONOutput (0.76s)
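
For reference, each stdout line above is a CloudEvents-style JSON envelope (specversion, id, source, type, datacontenttype, data), and in this particular output every value under data is a string. Below is a minimal Go sketch that decodes such lines from stdin under those assumptions; the program is hypothetical and not part of the test suite.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelope fields visible in the TestErrorJSONOutput stdout above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Read one JSON event per line from stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		fmt.Println(ev.Type, ev.Data["message"])
	}
}

Piping the stdout above through this sketch would print, for the final event: io.k8s.sigs.minikube.error The driver 'fail' is not supported on darwin/amd64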

TestKicCustomNetwork/create_custom_network (24.04s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-714000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-714000 --network=: (21.544822473s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-714000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-714000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-714000: (2.440440618s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.04s)

TestKicCustomNetwork/use_default_bridge_network (24.03s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-873000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-873000 --network=bridge: (21.685456744s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-873000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-873000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-873000: (2.289543609s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.03s)

TestKicExistingNetwork (24.1s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-008000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-008000 --network=existing-network: (21.452833242s)
helpers_test.go:175: Cleaning up "existing-network-008000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-008000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-008000: (2.298807826s)
--- PASS: TestKicExistingNetwork (24.10s)

TestKicCustomSubnet (24.34s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-149000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-149000 --subnet=192.168.60.0/24: (22.066385091s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-149000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-149000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-149000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-149000: (2.221939189s)
--- PASS: TestKicCustomSubnet (24.34s)

TestKicStaticIP (24.51s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-628000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-628000 --static-ip=192.168.200.200: (21.853283426s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-628000 ip
helpers_test.go:175: Cleaning up "static-ip-628000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-628000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-628000: (2.434524349s)
--- PASS: TestKicStaticIP (24.51s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (50.79s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-539000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-539000 --driver=docker : (22.258324796s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-541000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-541000 --driver=docker : (21.924871913s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-539000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-541000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-541000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-541000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-541000: (2.467703914s)
helpers_test.go:175: Cleaning up "first-539000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-539000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-539000: (2.521834188s)
--- PASS: TestMinikubeProfile (50.79s)

Test skip (16/153)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestAddons/parallel/Registry (13.72s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:308: registry stabilized in 15.957039ms
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xf6s7" [db8dcf3d-3d7e-4cba-8e99-918ce556a25b] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013002895s
addons_test.go:313: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lgxdl" [6efa156f-80d1-48c4-9eda-a0c9791740c8] Running
addons_test.go:313: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012550923s
addons_test.go:318: (dbg) Run:  kubectl --context addons-431000 delete po -l run=registry-test --now
addons_test.go:323: (dbg) Run:  kubectl --context addons-431000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:323: (dbg) Done: kubectl --context addons-431000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.60071722s)
addons_test.go:333: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.72s)

TestAddons/parallel/Ingress (10.23s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-431000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:210: (dbg) Run:  kubectl --context addons-431000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context addons-431000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [94b02667-46a2-4b86-9193-a95d5e9b2ee0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [94b02667-46a2-4b86-9193-a95d5e9b2ee0] Running
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.061868936s
addons_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p addons-431000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.23s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (12.25s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-323000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-323000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-jmw49" [19bc52b7-abd4-45f3-971d-2fab0b41080f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-jmw49" [19bc52b7-abd4-45f3-971d-2fab0b41080f] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.079543305s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (12.25s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)
