Test Report: Docker_macOS 17885

b721bab7b488b5e07b471be256ee12ce84535d3b:2024-01-03:32546

Failed tests (14/329)

TestIngressAddonLegacy/StartLegacyK8sCluster (261.12s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-996000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0103 12:04:39.355774   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 12:04:58.342293   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.352492   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.362771   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.384866   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.425099   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.507231   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.667727   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.988221   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:59.630064   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:05:00.910409   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:05:03.472618   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:05:07.040991   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 12:05:08.594706   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:05:18.835746   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:05:39.315456   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:06:20.274639   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-996000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m21.083305452s)

-- stdout --
	* [ingress-addon-legacy-996000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-996000 in cluster ingress-addon-legacy-996000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0103 12:02:28.804728   14062 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:02:28.804947   14062 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:02:28.804952   14062 out.go:309] Setting ErrFile to fd 2...
	I0103 12:02:28.804956   14062 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:02:28.805144   14062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:02:28.806610   14062 out.go:303] Setting JSON to false
	I0103 12:02:28.829133   14062 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5518,"bootTime":1704306630,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 12:02:28.829228   14062 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 12:02:28.850878   14062 out.go:177] * [ingress-addon-legacy-996000] minikube v1.32.0 on Darwin 14.2
	I0103 12:02:28.908423   14062 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 12:02:28.908522   14062 notify.go:220] Checking for updates...
	I0103 12:02:28.929553   14062 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 12:02:28.952648   14062 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 12:02:28.973267   14062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 12:02:28.994519   14062 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	I0103 12:02:29.036391   14062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 12:02:29.058098   14062 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 12:02:29.115827   14062 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 12:02:29.115980   14062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:02:29.215750   14062 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-03 20:02:29.206510638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:02:29.237400   14062 out.go:177] * Using the docker driver based on user configuration
	I0103 12:02:29.258963   14062 start.go:298] selected driver: docker
	I0103 12:02:29.258993   14062 start.go:902] validating driver "docker" against <nil>
	I0103 12:02:29.259007   14062 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 12:02:29.263436   14062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:02:29.364874   14062 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-03 20:02:29.356293842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:02:29.365059   14062 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 12:02:29.365246   14062 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 12:02:29.386558   14062 out.go:177] * Using Docker Desktop driver with root privileges
	I0103 12:02:29.407629   14062 cni.go:84] Creating CNI manager for ""
	I0103 12:02:29.407673   14062 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0103 12:02:29.407693   14062 start_flags.go:323] config:
	{Name:ingress-addon-legacy-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:02:29.450500   14062 out.go:177] * Starting control plane node ingress-addon-legacy-996000 in cluster ingress-addon-legacy-996000
	I0103 12:02:29.471600   14062 cache.go:121] Beginning downloading kic base image for docker with docker
	I0103 12:02:29.494429   14062 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 12:02:29.536398   14062 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0103 12:02:29.536496   14062 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 12:02:29.591211   14062 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 12:02:29.591238   14062 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 12:02:29.595294   14062 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0103 12:02:29.595307   14062 cache.go:56] Caching tarball of preloaded images
	I0103 12:02:29.595492   14062 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0103 12:02:29.616503   14062 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0103 12:02:29.659309   14062 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0103 12:02:29.744685   14062 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0103 12:02:36.121197   14062 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0103 12:02:36.121372   14062 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0103 12:02:36.755953   14062 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0103 12:02:36.756212   14062 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/config.json ...
	I0103 12:02:36.756235   14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/config.json: {Name:mk0057a77f8a4872e0e4ef2d65f0a305812e68d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:02:36.756526   14062 cache.go:194] Successfully downloaded all kic artifacts
	I0103 12:02:36.756558   14062 start.go:365] acquiring machines lock for ingress-addon-legacy-996000: {Name:mk776a6ad7fbaf0f5c5fac522d51577a218f4dfa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 12:02:36.756656   14062 start.go:369] acquired machines lock for "ingress-addon-legacy-996000" in 87.598µs
	I0103 12:02:36.756678   14062 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0103 12:02:36.756723   14062 start.go:125] createHost starting for "" (driver="docker")
	I0103 12:02:36.790798   14062 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0103 12:02:36.790996   14062 start.go:159] libmachine.API.Create for "ingress-addon-legacy-996000" (driver="docker")
	I0103 12:02:36.791022   14062 client.go:168] LocalClient.Create starting
	I0103 12:02:36.791116   14062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem
	I0103 12:02:36.791161   14062 main.go:141] libmachine: Decoding PEM data...
	I0103 12:02:36.791178   14062 main.go:141] libmachine: Parsing certificate...
	I0103 12:02:36.791221   14062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem
	I0103 12:02:36.791255   14062 main.go:141] libmachine: Decoding PEM data...
	I0103 12:02:36.791267   14062 main.go:141] libmachine: Parsing certificate...
	I0103 12:02:36.811464   14062 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-996000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 12:02:36.864258   14062 cli_runner.go:211] docker network inspect ingress-addon-legacy-996000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 12:02:36.864387   14062 network_create.go:281] running [docker network inspect ingress-addon-legacy-996000] to gather additional debugging logs...
	I0103 12:02:36.864407   14062 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-996000
	W0103 12:02:36.914865   14062 cli_runner.go:211] docker network inspect ingress-addon-legacy-996000 returned with exit code 1
	I0103 12:02:36.914893   14062 network_create.go:284] error running [docker network inspect ingress-addon-legacy-996000]: docker network inspect ingress-addon-legacy-996000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-996000 not found
	I0103 12:02:36.914907   14062 network_create.go:286] output of [docker network inspect ingress-addon-legacy-996000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-996000 not found
	
	** /stderr **
	I0103 12:02:36.915039   14062 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 12:02:36.966424   14062 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005ad4b0}
	I0103 12:02:36.966462   14062 network_create.go:124] attempt to create docker network ingress-addon-legacy-996000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0103 12:02:36.966541   14062 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-996000 ingress-addon-legacy-996000
	I0103 12:02:37.052462   14062 network_create.go:108] docker network ingress-addon-legacy-996000 192.168.49.0/24 created
	I0103 12:02:37.052527   14062 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-996000" container
	I0103 12:02:37.052658   14062 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 12:02:37.103600   14062 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-996000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996000 --label created_by.minikube.sigs.k8s.io=true
	I0103 12:02:37.155536   14062 oci.go:103] Successfully created a docker volume ingress-addon-legacy-996000
	I0103 12:02:37.155669   14062 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-996000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996000 --entrypoint /usr/bin/test -v ingress-addon-legacy-996000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 12:02:37.519639   14062 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-996000
	I0103 12:02:37.519680   14062 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0103 12:02:37.519693   14062 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 12:02:37.519814   14062 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-996000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 12:02:39.873546   14062 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-996000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (2.353711583s)
	I0103 12:02:39.873573   14062 kic.go:203] duration metric: took 2.353938 seconds to extract preloaded images to volume
	I0103 12:02:39.873682   14062 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 12:02:39.974396   14062 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-996000 --name ingress-addon-legacy-996000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-996000 --network ingress-addon-legacy-996000 --ip 192.168.49.2 --volume ingress-addon-legacy-996000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 12:02:40.245678   14062 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Running}}
	I0103 12:02:40.301294   14062 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Status}}
	I0103 12:02:40.356518   14062 cli_runner.go:164] Run: docker exec ingress-addon-legacy-996000 stat /var/lib/dpkg/alternatives/iptables
	I0103 12:02:40.508663   14062 oci.go:144] the created container "ingress-addon-legacy-996000" has a running status.
	I0103 12:02:40.508707   14062 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa...
	I0103 12:02:40.594462   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0103 12:02:40.594523   14062 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 12:02:40.662628   14062 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Status}}
	I0103 12:02:40.717647   14062 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 12:02:40.717670   14062 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-996000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 12:02:40.825230   14062 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Status}}
	I0103 12:02:40.877119   14062 machine.go:88] provisioning docker machine ...
	I0103 12:02:40.877180   14062 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-996000"
	I0103 12:02:40.877289   14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:02:40.929446   14062 main.go:141] libmachine: Using SSH client type: native
	I0103 12:02:40.929784   14062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 58372 <nil> <nil>}
	I0103 12:02:40.929800   14062 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-996000 && echo "ingress-addon-legacy-996000" | sudo tee /etc/hostname
	I0103 12:02:41.058475   14062 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-996000
	
	I0103 12:02:41.058563   14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:02:41.111183   14062 main.go:141] libmachine: Using SSH client type: native
	I0103 12:02:41.111489   14062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 58372 <nil> <nil>}
	I0103 12:02:41.111507   14062 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-996000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-996000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-996000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 12:02:41.231939   14062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 12:02:41.231969   14062 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17885-10646/.minikube CaCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17885-10646/.minikube}
	I0103 12:02:41.231994   14062 ubuntu.go:177] setting up certificates
	I0103 12:02:41.232002   14062 provision.go:83] configureAuth start
	I0103 12:02:41.232070   14062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996000
	I0103 12:02:41.284227   14062 provision.go:138] copyHostCerts
	I0103 12:02:41.284270   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem
	I0103 12:02:41.284325   14062 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem, removing ...
	I0103 12:02:41.284332   14062 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem
	I0103 12:02:41.284471   14062 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem (1078 bytes)
	I0103 12:02:41.284661   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem
	I0103 12:02:41.284688   14062 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem, removing ...
	I0103 12:02:41.284693   14062 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem
	I0103 12:02:41.284785   14062 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem (1123 bytes)
	I0103 12:02:41.284931   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem
	I0103 12:02:41.284969   14062 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem, removing ...
	I0103 12:02:41.284974   14062 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem
	I0103 12:02:41.285059   14062 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem (1679 bytes)
	I0103 12:02:41.285215   14062 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-996000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-996000]
	I0103 12:02:41.531802   14062 provision.go:172] copyRemoteCerts
	I0103 12:02:41.531869   14062 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 12:02:41.531940   14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:02:41.584780   14062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
	I0103 12:02:41.673213   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 12:02:41.673285   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 12:02:41.693969   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 12:02:41.694037   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0103 12:02:41.714129   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 12:02:41.714217   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 12:02:41.734862   14062 provision.go:86] duration metric: configureAuth took 502.855937ms
	I0103 12:02:41.734879   14062 ubuntu.go:193] setting minikube options for container-runtime
	I0103 12:02:41.735027   14062 config.go:182] Loaded profile config "ingress-addon-legacy-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0103 12:02:41.735100   14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:02:41.787939   14062 main.go:141] libmachine: Using SSH client type: native
	I0103 12:02:41.788242   14062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 58372 <nil> <nil>}
	I0103 12:02:41.788259   14062 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0103 12:02:41.907743   14062 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0103 12:02:41.907760   14062 ubuntu.go:71] root file system type: overlay
	I0103 12:02:41.907864   14062 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0103 12:02:41.907950   14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:02:41.960776   14062 main.go:141] libmachine: Using SSH client type: native
	I0103 12:02:41.961111   14062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 58372 <nil> <nil>}
	I0103 12:02:41.961162   14062 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0103 12:02:42.089486   14062 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0103 12:02:42.089580   14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:02:42.144997   14062 main.go:141] libmachine: Using SSH client type: native
	I0103 12:02:42.145322   14062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 58372 <nil> <nil>}
	I0103 12:02:42.145337   14062 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0103 12:02:42.699552   14062 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:02:42.087182906 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
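The unit file written above relies on the standard systemd override mechanism that its own comments describe: the base docker.service already defines an ExecStart= command, so the replacement must first emit an empty ExecStart= to clear it, otherwise systemd rejects the unit with the quoted "more than one ExecStart= setting" error. A minimal sketch of the same pattern as a drop-in, with a hypothetical override path and illustrative dockerd flags (minikube, as shown above, replaces the whole unit file instead):

# /etc/systemd/system/docker.service.d/override.conf (hypothetical path)
[Service]
# An empty ExecStart= resets the command list inherited from the base unit;
# without it, a second ExecStart= is invalid for non-oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

As in the log above, the new unit only takes effect after sudo systemctl daemon-reload followed by a restart of the docker service.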
	I0103 12:02:42.699577   14062 machine.go:91] provisioned docker machine in 1.822463077s
	I0103 12:02:42.699584   14062 client.go:171] LocalClient.Create took 5.908706394s
	I0103 12:02:42.699611   14062 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-996000" took 5.908763917s
	I0103 12:02:42.699623   14062 start.go:300] post-start starting for "ingress-addon-legacy-996000" (driver="docker")
	I0103 12:02:42.699632   14062 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 12:02:42.699698   14062 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 12:02:42.699760   14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:02:42.751380   14062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
	I0103 12:02:42.839441   14062 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 12:02:42.843240   14062 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 12:02:42.843268   14062 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 12:02:42.843276   14062 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 12:02:42.843282   14062 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 12:02:42.843293   14062 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/addons for local assets ...
	I0103 12:02:42.843385   14062 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/files for local assets ...
	I0103 12:02:42.843563   14062 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> 110902.pem in /etc/ssl/certs
	I0103 12:02:42.843575   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> /etc/ssl/certs/110902.pem
	I0103 12:02:42.843809   14062 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 12:02:42.851659   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:02:42.871685   14062 start.go:303] post-start completed in 172.057141ms
	I0103 12:02:42.872260   14062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996000
	I0103 12:02:42.924398   14062 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/config.json ...
	I0103 12:02:42.924867   14062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 12:02:42.924925   14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:02:42.976187   14062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
	I0103 12:02:43.060503   14062 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 12:02:43.065284   14062 start.go:128] duration metric: createHost completed in 6.308704109s
	I0103 12:02:43.065304   14062 start.go:83] releasing machines lock for "ingress-addon-legacy-996000", held for 6.308799819s
	I0103 12:02:43.065396   14062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996000
	I0103 12:02:43.116690   14062 ssh_runner.go:195] Run: cat /version.json
	I0103 12:02:43.116720   14062 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 12:02:43.116764   14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:02:43.116800   14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:02:43.170678   14062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
	I0103 12:02:43.170708   14062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
	I0103 12:02:43.363029   14062 ssh_runner.go:195] Run: systemctl --version
	I0103 12:02:43.367695   14062 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 12:02:43.372477   14062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0103 12:02:43.393726   14062 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0103 12:02:43.393788   14062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0103 12:02:43.408636   14062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0103 12:02:43.423416   14062 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
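The three find/sed passes above normalize the CNI files shipped in the base image: the loopback config gains a "name" field and is pinned to cniVersion 1.0.0, bridge configs lose any IPv6 routes/subnets and have their IPv4 subnet forced to the pod CIDR, and podman configs get subnet and gateway rewritten to match. An illustrative before/after for one entry (hypothetical starting values):

    "subnet": "10.85.0.0/16"    ->    "subnet": "10.244.0.0/16"
    "gateway": "10.85.0.1"      ->    "gateway": "10.244.0.1"    (podman configs only)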
	I0103 12:02:43.423436   14062 start.go:475] detecting cgroup driver to use...
	I0103 12:02:43.423452   14062 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:02:43.423569   14062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:02:43.438314   14062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0103 12:02:43.447849   14062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0103 12:02:43.456858   14062 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0103 12:02:43.456917   14062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0103 12:02:43.466205   14062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:02:43.475253   14062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0103 12:02:43.484350   14062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:02:43.493472   14062 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 12:02:43.502037   14062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0103 12:02:43.511434   14062 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 12:02:43.519355   14062 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 12:02:43.527063   14062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:02:43.578162   14062 ssh_runner.go:195] Run: sudo systemctl restart containerd
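Collectively, the sed edits above pin /etc/containerd/config.toml to a known state before the restart; a condensed sketch of the lines they target (reconstructed from the commands, not a dump of the actual file):

    sandbox_image = "registry.k8s.io/pause:3.2"
    restrict_oom_score_adj = false
    SystemdCgroup = false            # matches the detected cgroupfs host driver
    conf_dir = "/etc/cni/net.d"
    # plus: every io.containerd.runtime.v1.linux / runc.v1 reference rewritten to io.containerd.runc.v2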
	I0103 12:02:43.662267   14062 start.go:475] detecting cgroup driver to use...
	I0103 12:02:43.662287   14062 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:02:43.662363   14062 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0103 12:02:43.686480   14062 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0103 12:02:43.686545   14062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0103 12:02:43.697596   14062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:02:43.713403   14062 ssh_runner.go:195] Run: which cri-dockerd
	I0103 12:02:43.717834   14062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0103 12:02:43.726976   14062 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0103 12:02:43.744377   14062 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0103 12:02:43.826492   14062 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0103 12:02:43.922144   14062 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0103 12:02:43.922232   14062 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0103 12:02:43.938424   14062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:02:44.021042   14062 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 12:02:44.255599   14062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 12:02:44.278836   14062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
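The 130-byte file written to /etc/docker/daemon.json is what pins Docker to the detected cgroup driver. The log does not print the payload, so the following is an assumed reconstruction of a typical minikube daemon.json:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": {"max-size": "100m"},
      "storage-driver": "overlay2"
    }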
	I0103 12:02:44.324595   14062 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I0103 12:02:44.324724   14062 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-996000 dig +short host.docker.internal
	I0103 12:02:44.448874   14062 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0103 12:02:44.448971   14062 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0103 12:02:44.453488   14062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
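The grep/echo/cp pipeline above is an idempotent hosts-file upsert: strip any stale host.minikube.internal line, append the fresh mapping, and replace /etc/hosts through a temp file so a partially written file is never visible. Generalized (NAME and IP are placeholders):

    NAME=host.minikube.internal IP=192.168.65.254
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts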
	I0103 12:02:44.463686   14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:02:44.514893   14062 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0103 12:02:44.514967   14062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:02:44.534376   14062 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0103 12:02:44.534389   14062 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0103 12:02:44.534444   14062 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0103 12:02:44.542718   14062 ssh_runner.go:195] Run: which lz4
	I0103 12:02:44.546722   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0103 12:02:44.546848   14062 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 12:02:44.550877   14062 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 12:02:44.550903   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0103 12:02:50.221037   14062 docker.go:635] Took 5.674384 seconds to copy over tarball
	I0103 12:02:50.221101   14062 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 12:02:51.860731   14062 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.639654618s)
	I0103 12:02:51.860748   14062 ssh_runner.go:146] rm: /preloaded.tar.lz4
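The sequence above is the preload fast path: stat the tarball on the node, copy it over only when that existence check fails, unpack into /var with lz4, then delete it. As a standalone shell sketch (scp and the node: target stand in for minikube's internal SSH copy):

    stat -c '%s %y' /preloaded.tar.lz4 2>/dev/null ||
      scp preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 node:/preloaded.tar.lz4
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4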
	I0103 12:02:51.905381   14062 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0103 12:02:51.914050   14062 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0103 12:02:51.929120   14062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:02:51.980936   14062 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 12:02:52.973556   14062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:02:52.992263   14062 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0103 12:02:52.992275   14062 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0103 12:02:52.992287   14062 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 12:02:53.001667   14062 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0103 12:02:53.001712   14062 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0103 12:02:53.001669   14062 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:02:53.001755   14062 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0103 12:02:53.001774   14062 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 12:02:53.001776   14062 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 12:02:53.001958   14062 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 12:02:53.004009   14062 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0103 12:02:53.006555   14062 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:02:53.006760   14062 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 12:02:53.006978   14062 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 12:02:53.007080   14062 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0103 12:02:53.007125   14062 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 12:02:53.007600   14062 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0103 12:02:53.008105   14062 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0103 12:02:53.009302   14062 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0103 12:02:53.512576   14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 12:02:53.531360   14062 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0103 12:02:53.531402   14062 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 12:02:53.531466   14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 12:02:53.541015   14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0103 12:02:53.541798   14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0103 12:02:53.549358   14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0103 12:02:53.550369   14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0103 12:02:53.563757   14062 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0103 12:02:53.563791   14062 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 12:02:53.563926   14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0103 12:02:53.565163   14062 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0103 12:02:53.565186   14062 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0103 12:02:53.565248   14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0103 12:02:53.584868   14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0103 12:02:53.584955   14062 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0103 12:02:53.584975   14062 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 12:02:53.585038   14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0103 12:02:53.589662   14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0103 12:02:53.594410   14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0103 12:02:53.609010   14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0103 12:02:53.610632   14062 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0103 12:02:53.610656   14062 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0103 12:02:53.610715   14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0103 12:02:53.628211   14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0103 12:02:53.637415   14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:02:53.661254   14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0103 12:02:53.678517   14062 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0103 12:02:53.678546   14062 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I0103 12:02:53.678607   14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0103 12:02:53.682905   14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0103 12:02:53.697212   14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0103 12:02:53.702143   14062 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0103 12:02:53.702166   14062 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I0103 12:02:53.702233   14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0103 12:02:53.719030   14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0103 12:02:53.719077   14062 cache_images.go:92] LoadImages completed in 726.799551ms
	W0103 12:02:53.719111   14062 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
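The root cause of every "needs transfer" above: the preload tarball carries the images under their old k8s.gcr.io names (see the docker images listing), while this minikube expects registry.k8s.io names, so the lookup misses even though both registries serve the same images. With the on-disk cache also missing (the stat failure reported here), nothing can be loaded. On a host that does have the preloaded images, retagging would bridge the gap (an illustrative workaround, not something minikube does in this run):

    for img in kube-proxy:v1.18.20 kube-apiserver:v1.18.20 kube-scheduler:v1.18.20 \
               kube-controller-manager:v1.18.20 pause:3.2 coredns:1.6.7 etcd:3.4.3-0; do
        docker tag "k8s.gcr.io/$img" "registry.k8s.io/$img"
    done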
	I0103 12:02:53.719182   14062 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0103 12:02:53.767888   14062 cni.go:84] Creating CNI manager for ""
	I0103 12:02:53.767907   14062 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0103 12:02:53.767919   14062 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 12:02:53.767939   14062 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-996000 NodeName:ingress-addon-legacy-996000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 12:02:53.768041   14062 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-996000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
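Everything above is one rendered kubeadm manifest (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new below. When debugging such a config by hand, kubeadm can exercise it without mutating the node, assuming a matching v1.18 kubeadm binary is on PATH:

    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run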
	I0103 12:02:53.768094   14062 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-996000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
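The empty ExecStart= in the drop-in above is deliberate systemd syntax: assigning an empty value clears the ExecStart inherited from the base kubelet unit so the override can set its own. To inspect the merged result on the node:

    sudo systemctl cat kubelet    # prints the base unit plus each drop-in, in order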
	I0103 12:02:53.768158   14062 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0103 12:02:53.776471   14062 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 12:02:53.776527   14062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 12:02:53.784412   14062 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0103 12:02:53.799563   14062 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0103 12:02:53.815176   14062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0103 12:02:53.841967   14062 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0103 12:02:53.846119   14062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 12:02:53.856585   14062 certs.go:56] Setting up /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000 for IP: 192.168.49.2
	I0103 12:02:53.856608   14062 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a30c05f18415c794a1ae2617714fd3a6ba516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:02:53.856787   14062 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key
	I0103 12:02:53.856889   14062 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key
	I0103 12:02:53.856947   14062 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.key
	I0103 12:02:53.856961   14062 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.crt with IP's: []
	I0103 12:02:54.125255   14062 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.crt ...
	I0103 12:02:54.125269   14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.crt: {Name:mk3fad42c70d612449fc9d243d5b4fcc559d2f57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:02:54.125587   14062 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.key ...
	I0103 12:02:54.125596   14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.key: {Name:mk1bc7cf77add520d8f141f41b4a723ff72481f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:02:54.125819   14062 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key.dd3b5fb2
	I0103 12:02:54.125840   14062 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 12:02:54.214162   14062 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt.dd3b5fb2 ...
	I0103 12:02:54.214174   14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt.dd3b5fb2: {Name:mkcacd6c5af309e011baeadc8b6a0a3fb281f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:02:54.214454   14062 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key.dd3b5fb2 ...
	I0103 12:02:54.214463   14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key.dd3b5fb2: {Name:mke1995c76463b6a38c3c3214ea4cecf1304f436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:02:54.214658   14062 certs.go:337] copying /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt
	I0103 12:02:54.214834   14062 certs.go:341] copying /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key
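The apiserver certificate is issued for the node IP plus the in-cluster service VIP: 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR from the kubeadm config, alongside 192.168.49.2, 127.0.0.1, and 10.0.0.1. To confirm the SANs on the generated cert:

    openssl x509 -noout -text -in apiserver.crt | grep -A1 'Subject Alternative Name'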
	I0103 12:02:54.215002   14062 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.key
	I0103 12:02:54.215016   14062 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.crt with IP's: []
	I0103 12:02:54.585890   14062 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.crt ...
	I0103 12:02:54.585905   14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.crt: {Name:mk01de5b8eec29fcaf2b145d43418e4d3023c940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:02:54.586185   14062 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.key ...
	I0103 12:02:54.586200   14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.key: {Name:mkf84a96104b8716a9cd667aa3d9e48ed023e399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:02:54.586419   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0103 12:02:54.586451   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0103 12:02:54.586470   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0103 12:02:54.586487   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0103 12:02:54.586505   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 12:02:54.586530   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 12:02:54.586546   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 12:02:54.586564   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 12:02:54.586649   14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem (1338 bytes)
	W0103 12:02:54.586708   14062 certs.go:433] ignoring /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090_empty.pem, impossibly tiny 0 bytes
	I0103 12:02:54.586718   14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 12:02:54.586754   14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem (1078 bytes)
	I0103 12:02:54.586782   14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem (1123 bytes)
	I0103 12:02:54.586815   14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem (1679 bytes)
	I0103 12:02:54.586876   14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:02:54.586909   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:02:54.586930   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem -> /usr/share/ca-certificates/11090.pem
	I0103 12:02:54.586946   14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> /usr/share/ca-certificates/110902.pem
	I0103 12:02:54.587393   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 12:02:54.607914   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 12:02:54.628089   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 12:02:54.648598   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 12:02:54.668969   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 12:02:54.689963   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 12:02:54.710743   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 12:02:54.731103   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 12:02:54.750939   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 12:02:54.771579   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem --> /usr/share/ca-certificates/11090.pem (1338 bytes)
	I0103 12:02:54.791861   14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /usr/share/ca-certificates/110902.pem (1708 bytes)
	I0103 12:02:54.812207   14062 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 12:02:54.827571   14062 ssh_runner.go:195] Run: openssl version
	I0103 12:02:54.832969   14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110902.pem && ln -fs /usr/share/ca-certificates/110902.pem /etc/ssl/certs/110902.pem"
	I0103 12:02:54.841908   14062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110902.pem
	I0103 12:02:54.845994   14062 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:57 /usr/share/ca-certificates/110902.pem
	I0103 12:02:54.846041   14062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110902.pem
	I0103 12:02:54.852372   14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110902.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 12:02:54.861481   14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 12:02:54.870395   14062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:02:54.874637   14062 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:02:54.874685   14062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:02:54.881350   14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 12:02:54.890315   14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11090.pem && ln -fs /usr/share/ca-certificates/11090.pem /etc/ssl/certs/11090.pem"
	I0103 12:02:54.899028   14062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11090.pem
	I0103 12:02:54.903032   14062 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:57 /usr/share/ca-certificates/11090.pem
	I0103 12:02:54.903080   14062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11090.pem
	I0103 12:02:54.909580   14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11090.pem /etc/ssl/certs/51391683.0"
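The openssl x509 -hash runs explain the opaque symlink names: OpenSSL resolves trust anchors in /etc/ssl/certs by subject hash, so each PEM gets a <hash>.0 link (b5213941.0 for minikubeCA above). The generic recipe for any cert:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"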
	I0103 12:02:54.918423   14062 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 12:02:54.922417   14062 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 12:02:54.922466   14062 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:02:54.922561   14062 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 12:02:54.939486   14062 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 12:02:54.947731   14062 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 12:02:54.955815   14062 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 12:02:54.955878   14062 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:02:54.963864   14062 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 12:02:54.963891   14062 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 12:02:55.009324   14062 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0103 12:02:55.009376   14062 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 12:02:55.232707   14062 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 12:02:55.232799   14062 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 12:02:55.232915   14062 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 12:02:55.400893   14062 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 12:02:55.401655   14062 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 12:02:55.401694   14062 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 12:02:55.477190   14062 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 12:02:55.498754   14062 out.go:204]   - Generating certificates and keys ...
	I0103 12:02:55.498846   14062 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 12:02:55.498917   14062 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 12:02:55.706912   14062 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 12:02:55.782427   14062 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 12:02:55.916209   14062 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 12:02:56.016246   14062 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 12:02:56.263324   14062 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 12:02:56.263443   14062 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0103 12:02:56.457698   14062 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 12:02:56.457817   14062 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0103 12:02:56.507725   14062 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 12:02:56.597866   14062 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 12:02:56.785701   14062 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 12:02:56.785805   14062 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 12:02:56.856390   14062 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 12:02:57.065633   14062 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 12:02:57.374571   14062 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 12:02:57.598758   14062 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 12:02:57.599626   14062 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 12:02:57.621123   14062 out.go:204]   - Booting up control plane ...
	I0103 12:02:57.621239   14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 12:02:57.621325   14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 12:02:57.621416   14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 12:02:57.621510   14062 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 12:02:57.621695   14062 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 12:03:37.607377   14062 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0103 12:03:37.607913   14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:03:37.608138   14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:03:42.609092   14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:03:42.609310   14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:03:52.609235   14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:03:52.609401   14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:04:12.610413   14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:04:12.610634   14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:04:52.611468   14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:04:52.611778   14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
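The repeated kubelet-check failures are kubeadm polling the kubelet's local healthz endpoint once the static-pod manifests are in place; connection refused for the whole four-minute window means the kubelet never came up (or crash-looped before binding port 10248). The probe, verbatim:

    curl -sSL http://localhost:10248/healthz    # a healthy kubelet answers "ok"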
	I0103 12:04:52.611796   14062 kubeadm.go:322] 
	I0103 12:04:52.611885   14062 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0103 12:04:52.611982   14062 kubeadm.go:322] 		timed out waiting for the condition
	I0103 12:04:52.611997   14062 kubeadm.go:322] 
	I0103 12:04:52.612056   14062 kubeadm.go:322] 	This error is likely caused by:
	I0103 12:04:52.612114   14062 kubeadm.go:322] 		- The kubelet is not running
	I0103 12:04:52.612236   14062 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0103 12:04:52.612250   14062 kubeadm.go:322] 
	I0103 12:04:52.612363   14062 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0103 12:04:52.612397   14062 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0103 12:04:52.612433   14062 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0103 12:04:52.612439   14062 kubeadm.go:322] 
	I0103 12:04:52.612577   14062 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0103 12:04:52.612678   14062 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0103 12:04:52.612685   14062 kubeadm.go:322] 
	I0103 12:04:52.612785   14062 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0103 12:04:52.612914   14062 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0103 12:04:52.612983   14062 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0103 12:04:52.613011   14062 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0103 12:04:52.613017   14062 kubeadm.go:322] 
	I0103 12:04:52.614251   14062 kubeadm.go:322] W0103 20:02:55.009106    1700 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0103 12:04:52.614399   14062 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0103 12:04:52.614469   14062 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0103 12:04:52.614582   14062 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0103 12:04:52.614672   14062 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 12:04:52.614772   14062 kubeadm.go:322] W0103 20:02:57.604146    1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0103 12:04:52.614904   14062 kubeadm.go:322] W0103 20:02:57.604917    1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0103 12:04:52.614978   14062 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0103 12:04:52.615047   14062 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
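When reproducing this failure, the kubelet journal on the node is the fastest signal; the kubeadm hints above translate to (assuming the profile's container is still running under the docker driver):

    minikube -p ingress-addon-legacy-996000 ssh 'sudo journalctl -u kubelet --no-pager | tail -n 50'
    minikube -p ingress-addon-legacy-996000 ssh 'docker ps -a | grep kube | grep -v pause'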
	W0103 12:04:52.615146   14062 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0103 20:02:55.009106    1700 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0103 20:02:57.604146    1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0103 20:02:57.604917    1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
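The kubelet-check lines above all poll http://localhost:10248/healthz inside the minikube "node", which with the docker driver is itself a container on the host. A minimal manual version of that diagnosis, as a sketch — assuming the node container keeps this run's name (ingress-addon-legacy-996000) and that bash and curl are present in the image:

    # open a shell in the minikube node container (the docker driver runs the node as a container)
    docker exec -it ingress-addon-legacy-996000 /bin/bash
    # inside the node: service state and recent kubelet logs (the same commands kubeadm suggests)
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50
    # repeat the exact health probe the kubelet-check performs
    curl -sSL http://localhost:10248/healthz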
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0103 20:02:55.009106    1700 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0103 20:02:57.604146    1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0103 20:02:57.604917    1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0103 12:04:52.615181   14062 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0103 12:04:53.030723   14062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 12:04:53.042912   14062 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 12:04:53.042984   14062 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:04:53.051964   14062 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 12:04:53.051996   14062 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 12:04:53.098760   14062 kubeadm.go:322] W0103 20:04:53.098531    4712 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0103 12:04:53.203831   14062 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0103 12:04:53.203940   14062 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0103 12:04:53.254294   14062 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0103 12:04:53.331921   14062 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 12:04:54.261401   14062 kubeadm.go:322] W0103 20:04:54.261367    4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0103 12:04:54.262116   14062 kubeadm.go:322] W0103 20:04:54.262066    4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0103 12:06:49.269158   14062 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0103 12:06:49.269226   14062 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0103 12:06:49.271588   14062 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0103 12:06:49.271639   14062 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 12:06:49.271711   14062 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 12:06:49.271781   14062 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 12:06:49.271844   14062 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 12:06:49.271959   14062 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 12:06:49.272077   14062 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 12:06:49.272109   14062 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 12:06:49.272152   14062 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 12:06:49.293296   14062 out.go:204]   - Generating certificates and keys ...
	I0103 12:06:49.293381   14062 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 12:06:49.293436   14062 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 12:06:49.293537   14062 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0103 12:06:49.293589   14062 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0103 12:06:49.293636   14062 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0103 12:06:49.293676   14062 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0103 12:06:49.293744   14062 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0103 12:06:49.293822   14062 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0103 12:06:49.293881   14062 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0103 12:06:49.293942   14062 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0103 12:06:49.294008   14062 kubeadm.go:322] [certs] Using the existing "sa" key
	I0103 12:06:49.294086   14062 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 12:06:49.294128   14062 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 12:06:49.294168   14062 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 12:06:49.294225   14062 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 12:06:49.294275   14062 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 12:06:49.294329   14062 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 12:06:49.315382   14062 out.go:204]   - Booting up control plane ...
	I0103 12:06:49.315525   14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 12:06:49.315662   14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 12:06:49.315802   14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 12:06:49.315956   14062 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 12:06:49.316233   14062 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 12:06:49.316312   14062 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0103 12:06:49.316417   14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:06:49.316701   14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:06:49.316802   14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:06:49.317070   14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:06:49.317180   14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:06:49.317379   14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:06:49.317461   14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:06:49.317672   14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:06:49.317753   14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:06:49.317948   14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:06:49.317963   14062 kubeadm.go:322] 
	I0103 12:06:49.318003   14062 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0103 12:06:49.318049   14062 kubeadm.go:322] 		timed out waiting for the condition
	I0103 12:06:49.318059   14062 kubeadm.go:322] 
	I0103 12:06:49.318099   14062 kubeadm.go:322] 	This error is likely caused by:
	I0103 12:06:49.318138   14062 kubeadm.go:322] 		- The kubelet is not running
	I0103 12:06:49.318253   14062 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0103 12:06:49.318264   14062 kubeadm.go:322] 
	I0103 12:06:49.318378   14062 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0103 12:06:49.318415   14062 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0103 12:06:49.318446   14062 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0103 12:06:49.318454   14062 kubeadm.go:322] 
	I0103 12:06:49.318561   14062 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0103 12:06:49.318653   14062 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0103 12:06:49.318668   14062 kubeadm.go:322] 
	I0103 12:06:49.318768   14062 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0103 12:06:49.318829   14062 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0103 12:06:49.318917   14062 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0103 12:06:49.318954   14062 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0103 12:06:49.318970   14062 kubeadm.go:322] 
	I0103 12:06:49.319007   14062 kubeadm.go:406] StartCluster complete in 3m54.402477068s
	I0103 12:06:49.319112   14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:06:49.336809   14062 logs.go:284] 0 containers: []
	W0103 12:06:49.336823   14062 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:06:49.336897   14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:06:49.354415   14062 logs.go:284] 0 containers: []
	W0103 12:06:49.354427   14062 logs.go:286] No container was found matching "etcd"
	I0103 12:06:49.354498   14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:06:49.372526   14062 logs.go:284] 0 containers: []
	W0103 12:06:49.372544   14062 logs.go:286] No container was found matching "coredns"
	I0103 12:06:49.372610   14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:06:49.389656   14062 logs.go:284] 0 containers: []
	W0103 12:06:49.389670   14062 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:06:49.389756   14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:06:49.407719   14062 logs.go:284] 0 containers: []
	W0103 12:06:49.407733   14062 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:06:49.407801   14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:06:49.426164   14062 logs.go:284] 0 containers: []
	W0103 12:06:49.426178   14062 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:06:49.426254   14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:06:49.443234   14062 logs.go:284] 0 containers: []
	W0103 12:06:49.443248   14062 logs.go:286] No container was found matching "kindnet"
	I0103 12:06:49.443261   14062 logs.go:123] Gathering logs for kubelet ...
	I0103 12:06:49.443274   14062 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:06:49.478428   14062 logs.go:123] Gathering logs for dmesg ...
	I0103 12:06:49.478444   14062 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:06:49.490406   14062 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:06:49.490420   14062 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:06:49.553716   14062 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:06:49.553729   14062 logs.go:123] Gathering logs for Docker ...
	I0103 12:06:49.553737   14062 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:06:49.568423   14062 logs.go:123] Gathering logs for container status ...
	I0103 12:06:49.568437   14062 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0103 12:06:49.616492   14062 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0103 20:04:53.098531    4712 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0103 20:04:54.261367    4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0103 20:04:54.262066    4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0103 12:06:49.616517   14062 out.go:239] * 
	W0103 12:06:49.616558   14062 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0103 20:04:53.098531    4712 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0103 20:04:54.261367    4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0103 20:04:54.262066    4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0103 12:06:49.616574   14062 out.go:239] * 
	W0103 12:06:49.617201   14062 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
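The box above points at 'minikube logs'. Against this profile, the equivalent invocation would presumably use the same binary and profile name seen elsewhere in this run:

    # collect full cluster logs for attaching to a GitHub issue (profile name from this run)
    out/minikube-darwin-amd64 logs -p ingress-addon-legacy-996000 --file=logs.txt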
	I0103 12:06:49.679537   14062 out.go:177] 
	W0103 12:06:49.721553   14062 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0103 20:04:53.098531    4712 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0103 20:04:54.261367    4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0103 20:04:54.262066    4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0103 12:06:49.721606   14062 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0103 12:06:49.721638   14062 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
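The suggestion above addresses the IsDockerSystemdCheck warning from the kubelet side. The other common remedy is moving Docker itself to the systemd cgroup driver. A sketch only, assuming you can edit /etc/docker/daemon.json inside the node container; the "exec-opts" key is standard Docker daemon configuration, not something this log shows, and this overwrites any existing daemon.json:

    # inside the node container: switch Docker to the systemd cgroup driver, then restart it
    # (note: this replaces the whole daemon.json; merge by hand if the file already has settings)
    echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker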
	I0103 12:06:49.763383   14062 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-996000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (261.12s)
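A plausible local reproduction/verification step is the test's own start command with the override the log itself suggests appended; every element here is taken from the lines above except the placement of the extra flag, and the result is untested, so treat it as a starting point:

    # the failing start command from this test, plus the suggested kubelet cgroup-driver override
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-996000 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
      --alsologtostderr -v=5 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd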

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (97.17s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-996000 addons enable ingress --alsologtostderr -v=5
E0103 12:07:42.194020   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-996000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m36.736540384s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 12:06:49.920162   14298 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:06:49.921052   14298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:06:49.921060   14298 out.go:309] Setting ErrFile to fd 2...
	I0103 12:06:49.921064   14298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:06:49.921260   14298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:06:49.921608   14298 mustload.go:65] Loading cluster: ingress-addon-legacy-996000
	I0103 12:06:49.921922   14298 config.go:182] Loaded profile config "ingress-addon-legacy-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0103 12:06:49.921939   14298 addons.go:600] checking whether the cluster is paused
	I0103 12:06:49.922024   14298 config.go:182] Loaded profile config "ingress-addon-legacy-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0103 12:06:49.922040   14298 host.go:66] Checking if "ingress-addon-legacy-996000" exists ...
	I0103 12:06:49.922449   14298 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Status}}
	I0103 12:06:49.973784   14298 ssh_runner.go:195] Run: systemctl --version
	I0103 12:06:49.973880   14298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:06:50.025721   14298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
	I0103 12:06:50.110899   14298 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 12:06:50.149515   14298 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0103 12:06:50.170347   14298 config.go:182] Loaded profile config "ingress-addon-legacy-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0103 12:06:50.170361   14298 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-996000"
	I0103 12:06:50.170369   14298 addons.go:237] Setting addon ingress=true in "ingress-addon-legacy-996000"
	I0103 12:06:50.170396   14298 host.go:66] Checking if "ingress-addon-legacy-996000" exists ...
	I0103 12:06:50.170705   14298 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Status}}
	I0103 12:06:50.243148   14298 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0103 12:06:50.264391   14298 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0103 12:06:50.305995   14298 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0103 12:06:50.327593   14298 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0103 12:06:50.349556   14298 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0103 12:06:50.349588   14298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0103 12:06:50.349694   14298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:06:50.401223   14298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
	I0103 12:06:50.494308   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:06:50.546660   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:50.546691   14298 retry.go:31] will retry after 372.99749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:50.920209   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:06:50.969958   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:50.969976   14298 retry.go:31] will retry after 404.544941ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:51.375009   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:06:51.422646   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:51.422668   14298 retry.go:31] will retry after 724.470338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:52.149369   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:06:52.208759   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:52.208779   14298 retry.go:31] will retry after 629.218763ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:52.839028   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:06:52.889462   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:52.889482   14298 retry.go:31] will retry after 650.982399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:53.542313   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:06:53.593350   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:53.593387   14298 retry.go:31] will retry after 1.692195774s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:55.285801   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:06:55.335447   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:55.335465   14298 retry.go:31] will retry after 2.929357556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:58.265182   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:06:58.314935   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:06:58.314966   14298 retry.go:31] will retry after 4.230065582s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:07:02.545388   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:07:02.602171   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:07:02.602190   14298 retry.go:31] will retry after 6.214417418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:07:08.817231   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:07:08.865884   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:07:08.865902   14298 retry.go:31] will retry after 9.060542928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:07:17.928087   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:07:17.986600   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:07:17.986617   14298 retry.go:31] will retry after 9.124936181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:07:27.112971   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:07:27.163226   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:07:27.163244   14298 retry.go:31] will retry after 23.050967743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:07:50.213905   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:07:50.261293   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:07:50.261317   14298 retry.go:31] will retry after 36.171129457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:26.431852   14298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0103 12:08:26.483800   14298 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:26.483829   14298 addons.go:473] Verifying addon ingress=true in "ingress-addon-legacy-996000"
	I0103 12:08:26.505410   14298 out.go:177] * Verifying ingress addon...
	I0103 12:08:26.526765   14298 out.go:177] 
	W0103 12:08:26.548074   14298 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-996000" does not exist: client config: context "ingress-addon-legacy-996000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-996000" does not exist: client config: context "ingress-addon-legacy-996000" does not exist]
	W0103 12:08:26.548103   14298 out.go:239] * 
	* 
	W0103 12:08:26.552585   14298 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 12:08:26.574173   14298 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-996000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-996000:

-- stdout --
	[
	    {
	        "Id": "d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7",
	        "Created": "2024-01-03T20:02:40.025469109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52500,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:02:40.238393503Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/hosts",
	        "LogPath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7-json.log",
	        "Name": "/ingress-addon-legacy-996000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-996000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-996000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c-init/diff:/var/lib/docker/overlay2/d51c25870073ca49ae45bcaffff5d04b6853b273710b15cd26d3414e5d7cfab6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-996000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-996000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-996000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-996000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-996000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4624f500c247fa7e72e232d986f9afdd9d2a686d6e1a27e2a03353de71ce5afd",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58372"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58373"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58374"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58370"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58371"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4624f500c247",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-996000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d304ef977175",
	                        "ingress-addon-legacy-996000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "eda735f8eb664470b63f7ecc64e683286603bbae78af1f2c9cde859c2d70a0ef",
	                    "EndpointID": "0a23cd1043a4e58c412a3f091b102ffe9f01a2e44c97ffd22d2af1f16b88e874",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-996000 -n ingress-addon-legacy-996000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-996000 -n ingress-addon-legacy-996000: exit status 6 (374.524428ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0103 12:08:27.016514   14350 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-996000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-996000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (97.17s)
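Note: this addon-enable failure (and the ingress-dns one that follows) is downstream of the failed cluster start: every kubectl apply is refused on localhost:8443 because the apiserver never came up, minikube's retry helper backs off with growing delays (roughly 0.4s up to ~36s here) until the enable times out, and the status check additionally flags a kubeconfig context that no longer resolves. A short triage sketch using standard minikube commands against the same profile (assuming the node container is still running, as the docker inspect output above shows):

	# Confirm apiserver and kubelet state for the profile
	out/minikube-darwin-amd64 status -p ingress-addon-legacy-996000
	# Repair the stale kubeconfig context flagged in the status warning above
	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-996000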

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (99.37s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-996000 addons enable ingress-dns --alsologtostderr -v=5
E0103 12:09:39.347506   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 12:09:58.334492   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-996000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m38.93439923s)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0103 12:08:27.084607   14360 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:08:27.085467   14360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:08:27.085474   14360 out.go:309] Setting ErrFile to fd 2...
	I0103 12:08:27.085478   14360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:08:27.085669   14360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:08:27.086015   14360 mustload.go:65] Loading cluster: ingress-addon-legacy-996000
	I0103 12:08:27.086310   14360 config.go:182] Loaded profile config "ingress-addon-legacy-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0103 12:08:27.086326   14360 addons.go:600] checking whether the cluster is paused
	I0103 12:08:27.086410   14360 config.go:182] Loaded profile config "ingress-addon-legacy-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0103 12:08:27.086428   14360 host.go:66] Checking if "ingress-addon-legacy-996000" exists ...
	I0103 12:08:27.086846   14360 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Status}}
	I0103 12:08:27.137511   14360 ssh_runner.go:195] Run: systemctl --version
	I0103 12:08:27.137610   14360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:08:27.189750   14360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
	I0103 12:08:27.287076   14360 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 12:08:27.409482   14360 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0103 12:08:27.430460   14360 config.go:182] Loaded profile config "ingress-addon-legacy-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0103 12:08:27.430484   14360 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-996000"
	I0103 12:08:27.430499   14360 addons.go:237] Setting addon ingress-dns=true in "ingress-addon-legacy-996000"
	I0103 12:08:27.430549   14360 host.go:66] Checking if "ingress-addon-legacy-996000" exists ...
	I0103 12:08:27.431072   14360 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Status}}
	I0103 12:08:27.503120   14360 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0103 12:08:27.524528   14360 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0103 12:08:27.545604   14360 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0103 12:08:27.545640   14360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0103 12:08:27.545778   14360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
	I0103 12:08:27.598287   14360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
	I0103 12:08:27.693156   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:08:27.753564   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:27.753591   14360 retry.go:31] will retry after 303.755812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:28.057500   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:08:28.107489   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:28.107512   14360 retry.go:31] will retry after 336.544412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:28.445232   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:08:28.502495   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:28.502516   14360 retry.go:31] will retry after 634.765603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:29.137714   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:08:29.187883   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:29.187905   14360 retry.go:31] will retry after 784.582576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:29.974827   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:08:30.033024   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:30.033047   14360 retry.go:31] will retry after 852.028861ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:30.886869   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:08:30.941726   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:30.941744   14360 retry.go:31] will retry after 2.585226526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:33.527169   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:08:33.584249   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:33.584285   14360 retry.go:31] will retry after 2.458762169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:36.043330   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:08:36.105159   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:36.105190   14360 retry.go:31] will retry after 5.26095994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:41.366408   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:08:41.416443   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:41.416465   14360 retry.go:31] will retry after 4.395388391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:45.811906   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:08:45.860318   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:45.860337   14360 retry.go:31] will retry after 5.62818683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:51.489378   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:08:51.545066   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:08:51.545085   14360 retry.go:31] will retry after 13.785637392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:09:05.332698   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:09:05.384526   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:09:05.384548   14360 retry.go:31] will retry after 17.912720805s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:09:23.297386   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:09:23.350139   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:09:23.350159   14360 retry.go:31] will retry after 42.46618207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:10:05.815872   14360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0103 12:10:05.867311   14360 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0103 12:10:05.888796   14360 out.go:177] 
	W0103 12:10:05.909908   14360 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0103 12:10:05.909946   14360 out.go:239] * 
	* 
	W0103 12:10:05.914551   14360 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 12:10:05.935753   14360 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-996000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-996000:

-- stdout --
	[
	    {
	        "Id": "d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7",
	        "Created": "2024-01-03T20:02:40.025469109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52500,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:02:40.238393503Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/hosts",
	        "LogPath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7-json.log",
	        "Name": "/ingress-addon-legacy-996000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-996000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-996000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c-init/diff:/var/lib/docker/overlay2/d51c25870073ca49ae45bcaffff5d04b6853b273710b15cd26d3414e5d7cfab6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-996000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-996000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-996000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-996000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-996000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4624f500c247fa7e72e232d986f9afdd9d2a686d6e1a27e2a03353de71ce5afd",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58372"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58373"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58374"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58370"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58371"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4624f500c247",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-996000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d304ef977175",
	                        "ingress-addon-legacy-996000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "eda735f8eb664470b63f7ecc64e683286603bbae78af1f2c9cde859c2d70a0ef",
	                    "EndpointID": "0a23cd1043a4e58c412a3f091b102ffe9f01a2e44c97ffd22d2af1f16b88e874",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
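
Rather than scanning the full dump, the fields of interest here (container state, node IP, host port bindings) can be pulled with docker's template syntax (a sketch using the stock docker CLI; the container name comes from the inspect output above):

	docker inspect -f '{{.State.Status}}' ingress-addon-legacy-996000
	docker inspect -f '{{(index .NetworkSettings.Networks "ingress-addon-legacy-996000").IPAddress}}' ingress-addon-legacy-996000
	docker port ingress-addon-legacy-996000 8443
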
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-996000 -n ingress-addon-legacy-996000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-996000 -n ingress-addon-legacy-996000: exit status 6 (376.944137ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0103 12:10:06.380322   14405 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-996000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-996000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (99.37s)
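
The status failure is the stale-context condition minikube itself flags above: the profile is missing from the kubeconfig, so the endpoint IP cannot be extracted. Applying minikube's own suggestion by hand would look like this (a sketch, assuming the profile still exists):

	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-996000
	kubectl config current-context
	out/minikube-darwin-amd64 status -p ingress-addon-legacy-996000
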

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.43s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-996000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-996000:

-- stdout --
	[
	    {
	        "Id": "d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7",
	        "Created": "2024-01-03T20:02:40.025469109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52500,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:02:40.238393503Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/hosts",
	        "LogPath": "/var/lib/docker/containers/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7/d304ef9771755e02271f769ce6c7ce3668a7ca9206fbab529e8b297e7621f8e7-json.log",
	        "Name": "/ingress-addon-legacy-996000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-996000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-996000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c-init/diff:/var/lib/docker/overlay2/d51c25870073ca49ae45bcaffff5d04b6853b273710b15cd26d3414e5d7cfab6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42be373be4a161a3136798c97927609f8c30a4294a5e73101fa865147c5c219c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-996000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-996000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-996000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-996000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-996000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4624f500c247fa7e72e232d986f9afdd9d2a686d6e1a27e2a03353de71ce5afd",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58372"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58373"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58374"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58370"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58371"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4624f500c247",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-996000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d304ef977175",
	                        "ingress-addon-legacy-996000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "eda735f8eb664470b63f7ecc64e683286603bbae78af1f2c9cde859c2d70a0ef",
	                    "EndpointID": "0a23cd1043a4e58c412a3f091b102ffe9f01a2e44c97ffd22d2af1f16b88e874",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-996000 -n ingress-addon-legacy-996000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-996000 -n ingress-addon-legacy-996000: exit status 6 (375.337434ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0103 12:10:06.809041   14417 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-996000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-996000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.43s)
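
Here the harness cannot build a Kubernetes client for the same underlying reason: the profile's context is absent from the kubeconfig. A quick manual check (a sketch; the context name normally matches the profile name):

	kubectl config get-contexts
	kubectl --context ingress-addon-legacy-996000 get pods -A
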

TestRunningBinaryUpgrade (67.03s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.81000749.exe start -p running-upgrade-030000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:133: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.81000749.exe start -p running-upgrade-030000 --memory=2200 --vm-driver=docker : exit status 70 (52.480885303s)

-- stdout --
	! [running-upgrade-030000] minikube v1.9.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig2814273552
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:30:55.567369779 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-030000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:31:09.425049933 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-030000", then "minikube start -p running-upgrade-030000 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* minikube 1.32.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.32.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try 'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:31:09.425049933 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
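
The diffs above show the legacy provisioner rewriting /lib/systemd/system/docker.service wholesale, using the ExecStart-clearing trick its own comments describe. Expressed as a conventional drop-in, which is easier to test by hand inside the node, the same pattern looks roughly like this (a sketch: the file name and the dockerd flags are illustrative, not the exact set minikube writes):

	# /etc/systemd/system/docker.service.d/10-execstart.conf
	[Service]
	# An empty ExecStart= clears the command inherited from the base unit,
	# leaving exactly one ExecStart, as required for a Type=notify service.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	# Apply with: sudo systemctl daemon-reload && sudo systemctl restart docker
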
version_upgrade_test.go:133: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.81000749.exe start -p running-upgrade-030000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:133: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.81000749.exe start -p running-upgrade-030000 --memory=2200 --vm-driver=docker : exit status 70 (3.947131518s)

-- stdout --
	* [running-upgrade-030000] minikube v1.9.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig4118146875
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-030000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.81000749.exe start -p running-upgrade-030000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:133: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.81000749.exe start -p running-upgrade-030000 --memory=2200 --vm-driver=docker : exit status 70 (4.013960253s)

-- stdout --
	* [running-upgrade-030000] minikube v1.9.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig2935427674
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-030000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:139: legacy v1.9.0 start failed: exit status 70
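
Because the kic node is itself a privileged container running systemd, the diagnostics the error text suggests can be collected from the host without SSH (a sketch; the container name matches the profile, as the inspect output below confirms):

	docker exec running-upgrade-030000 systemctl status docker.service --no-pager
	docker exec running-upgrade-030000 journalctl -u docker.service --no-pager -n 50
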
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-03 12:31:22.306305 -0800 PST m=+2471.357268221
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-030000
helpers_test.go:235: (dbg) docker inspect running-upgrade-030000:

-- stdout --
	[
	    {
	        "Id": "db720803eaac813589ef87990fcccfb28eeec8ff884340b41988aeba023c0e7a",
	        "Created": "2024-01-03T20:31:04.129320256Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195615,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:31:04.335803641Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/db720803eaac813589ef87990fcccfb28eeec8ff884340b41988aeba023c0e7a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/db720803eaac813589ef87990fcccfb28eeec8ff884340b41988aeba023c0e7a/hostname",
	        "HostsPath": "/var/lib/docker/containers/db720803eaac813589ef87990fcccfb28eeec8ff884340b41988aeba023c0e7a/hosts",
	        "LogPath": "/var/lib/docker/containers/db720803eaac813589ef87990fcccfb28eeec8ff884340b41988aeba023c0e7a/db720803eaac813589ef87990fcccfb28eeec8ff884340b41988aeba023c0e7a-json.log",
	        "Name": "/running-upgrade-030000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-030000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/339acd01859ed2daa132f63ebc752f312a64b2a30a7cdc0305ead1491a4834df-init/diff:/var/lib/docker/overlay2/7c1058393de05d57474dbcbd37c6ec709fbf33413894282a82077ef58fc6e698/diff:/var/lib/docker/overlay2/7dda4c9c0be5066e2ddf4e1e6242f63e77b78758b4834e1f63a750b14fbcc3fc/diff:/var/lib/docker/overlay2/280e807d2b1fd4ae6a18a171facb2d484076c4a0c1689f7626b27fb9a1920edc/diff:/var/lib/docker/overlay2/b304d7dac9ccbce70c1e2360b60e3dbb8f440d239d2c6d19c36a98f0f3ef93f6/diff:/var/lib/docker/overlay2/fda9bc765860b25ccfb6eb7b8571899e1f2ffd356e1194757ddc92e08749350e/diff:/var/lib/docker/overlay2/739a478ee396ce7662c29a7a292c715f4e9950ca4df160b543591a5cd3f2f991/diff:/var/lib/docker/overlay2/3c0f90e74fe176f514fdc0f57219012eed5ecfc3447df8c8c62677d03d138137/diff:/var/lib/docker/overlay2/98da65a42bcba99ba48f3e017b0ae50f5090bb3d14eb6bc0e68a3ce86c428add/diff:/var/lib/docker/overlay2/9299b2e2763b5eed5cffcf0d8be6a4006b334c6330b526b1d079b29a740eeb32/diff:/var/lib/docker/overlay2/3ebe52
1db2799715ea3f9b8f112788be312c6ea9f635bdf480aa11b2004b547b/diff:/var/lib/docker/overlay2/9b7e180a63cf14cb532c3673d813b37898abe62dd2bad4e0e92110d8610ec0f8/diff:/var/lib/docker/overlay2/ddf6f44bbb344c1e6a8334c6c9455eb5dfc26b41c8c8e6b02b753d6d6fe94e9f/diff:/var/lib/docker/overlay2/aa1c1a3edc77ab2fbbf17591e24f5a8d150bb589c1d7fbff7c92c8bac9ec86be/diff:/var/lib/docker/overlay2/3d23b5bc6d406820c1ab948362dfaf5e78f123d20b83ec8f8188371597a551e5/diff:/var/lib/docker/overlay2/4ce0c817f78b2c368c8e1a4165d97a417c85e82c84f76c7aa26ab307e79a07e7/diff:/var/lib/docker/overlay2/4733545d684690c16562ec8430aaf0c9c11d6ca0182484521c8dcfe01a712469/diff:/var/lib/docker/overlay2/ae33f553fbffcf84515eb8f460e586c2fab605eb2e5fac70cf9dc4c0a5d2c5f5/diff:/var/lib/docker/overlay2/bd519fcfb45a1d5babe79a9d7de0c3e41afdceae533bf99fc6efbd7243735acb/diff:/var/lib/docker/overlay2/7dc00b67b14575632e30faf9b738ddbc8047d2d2b0f3193df96dac7ecaa9498c/diff:/var/lib/docker/overlay2/b36c418a5162f80076f606a888e61689e66c635505ce213c8f4fbebb37e75e46/diff:/var/lib/d
ocker/overlay2/a89c18d13f8d0ef6346597a5bc6f50c7cbf664d26750fda226c75dd348d533ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/339acd01859ed2daa132f63ebc752f312a64b2a30a7cdc0305ead1491a4834df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/339acd01859ed2daa132f63ebc752f312a64b2a30a7cdc0305ead1491a4834df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/339acd01859ed2daa132f63ebc752f312a64b2a30a7cdc0305ead1491a4834df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-030000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-030000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-030000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-030000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-030000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e2405b86692ebd788c045a384217c3d75c9c8ce83642d93db0b1ce965d2e5ad1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59676"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59677"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59675"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e2405b86692e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "291205cb77ee42c655f535bd275deb9a0ded66fd209b7ee222bcaaa5cabe9f9d",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "NetworkID": "f6d087d90d9a60fec835f9c6365cd7e1819a4856f9a6569205ba33cdfc735896",
	                    "EndpointID": "291205cb77ee42c655f535bd275deb9a0ded66fd209b7ee222bcaaa5cabe9f9d",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
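The inspect dump above is what the harness prints for the failed profile's container; the operative part for the tests is NetworkSettings.Ports, where the container's 22/tcp, 2376/tcp and 8443/tcp are published on 127.0.0.1 under ephemeral host ports (59676, 59677 and 59675 here). Later log lines read a mapping back through a Go template; a minimal sketch of that query, where hostPort is an illustrative helper and the template string is the same one the cli_runner lines below use:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort asks dockerd which host port a container port
	// (e.g. "22/tcp") is published on.
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		p, err := hostPort("running-upgrade-030000", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh published on 127.0.0.1:" + p) // "59676" in the run above
	}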
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-030000 -n running-upgrade-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-030000 -n running-upgrade-030000: exit status 6 (379.042863ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 12:31:22.733611   20471 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-030000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-030000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
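Exit status 6 is the kubeconfig check failing, not the container: the host shows Running, but the profile's context is missing from the kubeconfig file named in the stderr line, so status cannot verify the API endpoint. A rough sketch of that lookup, assuming client-go's clientcmd package (endpointFor is an illustrative name, not minikube's own function):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	// endpointFor returns the API server URL recorded for a profile's
	// context, or an error when the context is absent, which is the
	// condition behind "does not appear in ... kubeconfig" above.
	func endpointFor(kubeconfig, profile string) (string, error) {
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			return "", err
		}
		ctx, ok := cfg.Contexts[profile]
		if !ok {
			return "", fmt.Errorf("%q does not appear in %s", profile, kubeconfig)
		}
		cluster, ok := cfg.Clusters[ctx.Cluster]
		if !ok {
			return "", fmt.Errorf("context %q points at unknown cluster %q", profile, ctx.Cluster)
		}
		return cluster.Server, nil
	}

	func main() {
		_, err := endpointFor("/Users/jenkins/minikube-integration/17885-10646/kubeconfig", "running-upgrade-030000")
		fmt.Println(err)
	}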
helpers_test.go:175: Cleaning up "running-upgrade-030000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-030000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-030000: (2.152526984s)
--- FAIL: TestRunningBinaryUpgrade (67.03s)

                                                
                                    
TestKubernetesUpgrade (570.82s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0103 12:32:42.450720   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m13.054301735s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-738000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-738000 in cluster kubernetes-upgrade-738000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
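The doubled "Generating certificates and keys ... / Booting up control plane ..." lines in the stdout above are the tell: the v1.16.0 bootstrap timed out and minikube retried it once before giving up, which accounts for most of the 4m13s wall time. Exit status 109 comes from minikube's own exit-code scheme rather than a shell convention; the test only cares that the exit is non-zero. A trimmed sketch of that assertion, with the command line copied from the log (runStart is an illustrative wrapper, not the test's helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runStart runs the start command and surfaces the exit code
	// (109 in the run above) when it fails.
	func runStart() error {
		cmd := exec.Command("out/minikube-darwin-amd64", "start",
			"-p", "kubernetes-upgrade-738000", "--memory=2200",
			"--kubernetes-version=v1.16.0", "--alsologtostderr", "-v=1", "--driver=docker")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return fmt.Errorf("non-zero exit: %d", ee.ExitCode())
		}
		return err
	}

	func main() {
		if err := runStart(); err != nil {
			fmt.Println(err) // the test fails the run at this point
		}
	}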
** stderr ** 
	I0103 12:32:08.611913   20844 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:32:08.612127   20844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:32:08.612134   20844 out.go:309] Setting ErrFile to fd 2...
	I0103 12:32:08.612138   20844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:32:08.612323   20844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:32:08.613719   20844 out.go:303] Setting JSON to false
	I0103 12:32:08.636095   20844 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7298,"bootTime":1704306630,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 12:32:08.636212   20844 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 12:32:08.658955   20844 out.go:177] * [kubernetes-upgrade-738000] minikube v1.32.0 on Darwin 14.2
	I0103 12:32:08.700676   20844 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 12:32:08.700740   20844 notify.go:220] Checking for updates...
	I0103 12:32:08.742573   20844 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 12:32:08.763606   20844 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 12:32:08.784481   20844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 12:32:08.805534   20844 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	I0103 12:32:08.826633   20844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 12:32:08.848256   20844 config.go:182] Loaded profile config "cert-expiration-730000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0103 12:32:08.848387   20844 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 12:32:08.905004   20844 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 12:32:08.905173   20844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:32:09.004177   20844 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 20:32:08.994363649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:32:09.046864   20844 out.go:177] * Using the docker driver based on user configuration
	I0103 12:32:09.067876   20844 start.go:298] selected driver: docker
	I0103 12:32:09.067905   20844 start.go:902] validating driver "docker" against <nil>
	I0103 12:32:09.067918   20844 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 12:32:09.072220   20844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:32:09.174170   20844 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 20:32:09.164624921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:32:09.174340   20844 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 12:32:09.174531   20844 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0103 12:32:09.195979   20844 out.go:177] * Using Docker Desktop driver with root privileges
	I0103 12:32:09.216791   20844 cni.go:84] Creating CNI manager for ""
	I0103 12:32:09.216835   20844 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0103 12:32:09.216854   20844 start_flags.go:323] config:
	{Name:kubernetes-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:32:09.238547   20844 out.go:177] * Starting control plane node kubernetes-upgrade-738000 in cluster kubernetes-upgrade-738000
	I0103 12:32:09.280625   20844 cache.go:121] Beginning downloading kic base image for docker with docker
	I0103 12:32:09.301397   20844 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 12:32:09.343604   20844 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0103 12:32:09.343698   20844 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0103 12:32:09.343733   20844 cache.go:56] Caching tarball of preloaded images
	I0103 12:32:09.343724   20844 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 12:32:09.343994   20844 preload.go:174] Found /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0103 12:32:09.344018   20844 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0103 12:32:09.344197   20844 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/config.json ...
	I0103 12:32:09.344938   20844 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/config.json: {Name:mk141664f746bb249f7c302a110a190f1e46b658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:32:09.397320   20844 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 12:32:09.397558   20844 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 12:32:09.397589   20844 cache.go:194] Successfully downloaded all kic artifacts
	I0103 12:32:09.397635   20844 start.go:365] acquiring machines lock for kubernetes-upgrade-738000: {Name:mk8869f3f7d225e1a6198587201403ee92199d84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 12:32:09.397795   20844 start.go:369] acquired machines lock for "kubernetes-upgrade-738000" in 144.923µs
	I0103 12:32:09.397823   20844 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0103 12:32:09.397934   20844 start.go:125] createHost starting for "" (driver="docker")
	I0103 12:32:09.455555   20844 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0103 12:32:09.455931   20844 start.go:159] libmachine.API.Create for "kubernetes-upgrade-738000" (driver="docker")
	I0103 12:32:09.455983   20844 client.go:168] LocalClient.Create starting
	I0103 12:32:09.456142   20844 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem
	I0103 12:32:09.456235   20844 main.go:141] libmachine: Decoding PEM data...
	I0103 12:32:09.456268   20844 main.go:141] libmachine: Parsing certificate...
	I0103 12:32:09.456377   20844 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem
	I0103 12:32:09.456453   20844 main.go:141] libmachine: Decoding PEM data...
	I0103 12:32:09.456470   20844 main.go:141] libmachine: Parsing certificate...
	I0103 12:32:09.457248   20844 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-738000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 12:32:09.508682   20844 cli_runner.go:211] docker network inspect kubernetes-upgrade-738000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 12:32:09.508781   20844 network_create.go:281] running [docker network inspect kubernetes-upgrade-738000] to gather additional debugging logs...
	I0103 12:32:09.508800   20844 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-738000
	W0103 12:32:09.559703   20844 cli_runner.go:211] docker network inspect kubernetes-upgrade-738000 returned with exit code 1
	I0103 12:32:09.559735   20844 network_create.go:284] error running [docker network inspect kubernetes-upgrade-738000]: docker network inspect kubernetes-upgrade-738000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-738000 not found
	I0103 12:32:09.559751   20844 network_create.go:286] output of [docker network inspect kubernetes-upgrade-738000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-738000 not found
	
	** /stderr **
	I0103 12:32:09.559895   20844 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 12:32:09.613832   20844 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0103 12:32:09.614180   20844 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021e54d0}
	I0103 12:32:09.614197   20844 network_create.go:124] attempt to create docker network kubernetes-upgrade-738000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0103 12:32:09.614279   20844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-738000 kubernetes-upgrade-738000
	W0103 12:32:09.665556   20844 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-738000 kubernetes-upgrade-738000 returned with exit code 1
	W0103 12:32:09.665601   20844 network_create.go:149] failed to create docker network kubernetes-upgrade-738000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-738000 kubernetes-upgrade-738000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0103 12:32:09.665620   20844 network_create.go:116] failed to create docker network kubernetes-upgrade-738000 192.168.58.0/24, will retry: subnet is taken
	I0103 12:32:09.667052   20844 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0103 12:32:09.667393   20844 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00234f840}
	I0103 12:32:09.667410   20844 network_create.go:124] attempt to create docker network kubernetes-upgrade-738000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0103 12:32:09.667479   20844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-738000 kubernetes-upgrade-738000
	I0103 12:32:09.756588   20844 network_create.go:108] docker network kubernetes-upgrade-738000 192.168.67.0/24 created
	I0103 12:32:09.756637   20844 kic.go:121] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-738000" container
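The two rejected candidates above are minikube's subnet scan at work: 192.168.49.0/24 is already reserved locally, 192.168.58.0/24 passes the local check but the daemon refuses it ("Pool overlaps with other one on this address space"), and the create succeeds on the next /24, 192.168.67.0/24, whose .2 address becomes the node's static IP. A minimal sketch of that walk; the candidate list is hard-coded for illustration, whereas the real scan in network_create.go derives it:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createNetwork tries candidate /24 subnets until the daemon accepts
	// one; an overlap error just advances to the next candidate.
	func createNetwork(name string) (string, error) {
		candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
		for _, subnet := range candidates {
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, name).CombinedOutput()
			if err == nil {
				return subnet, nil
			}
			fmt.Printf("subnet %s rejected: %s", subnet, out) // e.g. "Pool overlaps ..."
		}
		return "", fmt.Errorf("no free subnet for %s", name)
	}

	func main() {
		subnet, err := createNetwork("kubernetes-upgrade-738000")
		fmt.Println(subnet, err) // "192.168.67.0/24 <nil>" in the run above
	}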
	I0103 12:32:09.756779   20844 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 12:32:09.808512   20844 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-738000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-738000 --label created_by.minikube.sigs.k8s.io=true
	I0103 12:32:09.861029   20844 oci.go:103] Successfully created a docker volume kubernetes-upgrade-738000
	I0103 12:32:09.861144   20844 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-738000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-738000 --entrypoint /usr/bin/test -v kubernetes-upgrade-738000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 12:32:10.390426   20844 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-738000
	I0103 12:32:10.390472   20844 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0103 12:32:10.390485   20844 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 12:32:10.390576   20844 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-738000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 12:32:12.655193   20844 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-738000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (2.264590698s)
	I0103 12:32:12.655222   20844 kic.go:203] duration metric: took 2.264769 seconds to extract preloaded images to volume
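The volume is warmed before the node container exists: one throwaway container verifies that the named volume mounts at /var, and a second untars the v1.16.0 preload into it, so dockerd inside the node boots with its image store already populated. The same two docker runs condensed into a sketch (warmVolume is an illustrative helper; the tarball path and image digest are abbreviated placeholders):

	package main

	import "os/exec"

	// warmVolume mirrors the two runs above: probe /var/lib on the named
	// volume, then extract the preloaded image tarball into it.
	func warmVolume(vol, kicImage, tarball string) error {
		if err := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/test",
			"-v", vol+":/var", kicImage, "-d", "/var/lib").Run(); err != nil {
			return err
		}
		return exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", vol+":/extractDir", kicImage,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
	}

	func main() {
		_ = warmVolume("kubernetes-upgrade-738000",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857",
			"/path/to/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4")
	}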
	I0103 12:32:12.655328   20844 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 12:32:12.757770   20844 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-738000 --name kubernetes-upgrade-738000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-738000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-738000 --network kubernetes-upgrade-738000 --ip 192.168.67.2 --volume kubernetes-upgrade-738000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 12:32:13.036249   20844 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-738000 --format={{.State.Running}}
	I0103 12:32:13.091150   20844 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-738000 --format={{.State.Status}}
	I0103 12:32:13.146928   20844 cli_runner.go:164] Run: docker exec kubernetes-upgrade-738000 stat /var/lib/dpkg/alternatives/iptables
	I0103 12:32:13.274348   20844 oci.go:144] the created container "kubernetes-upgrade-738000" has a running status.
	I0103 12:32:13.274396   20844 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa...
	I0103 12:32:13.607069   20844 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 12:32:13.667341   20844 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-738000 --format={{.State.Status}}
	I0103 12:32:13.721863   20844 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 12:32:13.721887   20844 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-738000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 12:32:13.817171   20844 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-738000 --format={{.State.Status}}
	I0103 12:32:13.869282   20844 machine.go:88] provisioning docker machine ...
	I0103 12:32:13.869328   20844 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-738000"
	I0103 12:32:13.869435   20844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:32:13.920410   20844 main.go:141] libmachine: Using SSH client type: native
	I0103 12:32:13.920757   20844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 59786 <nil> <nil>}
	I0103 12:32:13.920770   20844 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-738000 && echo "kubernetes-upgrade-738000" | sudo tee /etc/hostname
	I0103 12:32:14.052219   20844 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-738000
	
	I0103 12:32:14.052311   20844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:32:14.104531   20844 main.go:141] libmachine: Using SSH client type: native
	I0103 12:32:14.104834   20844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 59786 <nil> <nil>}
	I0103 12:32:14.104847   20844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-738000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-738000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-738000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 12:32:14.223864   20844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 12:32:14.223887   20844 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17885-10646/.minikube CaCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17885-10646/.minikube}
	I0103 12:32:14.223906   20844 ubuntu.go:177] setting up certificates
	I0103 12:32:14.223923   20844 provision.go:83] configureAuth start
	I0103 12:32:14.223993   20844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-738000
	I0103 12:32:14.274947   20844 provision.go:138] copyHostCerts
	I0103 12:32:14.275055   20844 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem, removing ...
	I0103 12:32:14.275066   20844 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem
	I0103 12:32:14.275203   20844 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem (1078 bytes)
	I0103 12:32:14.275443   20844 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem, removing ...
	I0103 12:32:14.275452   20844 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem
	I0103 12:32:14.275534   20844 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem (1123 bytes)
	I0103 12:32:14.275727   20844 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem, removing ...
	I0103 12:32:14.275733   20844 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem
	I0103 12:32:14.275812   20844 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem (1679 bytes)
	I0103 12:32:14.275964   20844 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-738000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-738000]
	I0103 12:32:14.320203   20844 provision.go:172] copyRemoteCerts
	I0103 12:32:14.320256   20844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 12:32:14.320304   20844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:32:14.371232   20844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59786 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:32:14.458292   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 12:32:14.478870   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 12:32:14.499169   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0103 12:32:14.520315   20844 provision.go:86] duration metric: configureAuth took 296.38393ms
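configureAuth issues a server certificate whose SAN set (the san=[...] list at provision.go:112 above) covers the container IP, loopback and the machine names, then ships it with the CA into /etc/docker so the TLS flags in the dockerd command line further down can be verified. A self-contained sketch of minting a certificate with that SAN set, using a freshly generated throwaway CA in place of the files under .minikube/certs (key persistence is omitted):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// serverCert signs a cert for the SAN set seen in the log:
	// IPs 192.168.67.2 and 127.0.0.1 plus the machine's DNS names.
	func serverCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-738000"}},
			IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-738000"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	}

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(0), IsCA: true, BasicConstraintsValid: true,
			NotBefore: time.Now(), NotAfter: time.Now().Add(26280 * time.Hour),
			KeyUsage: x509.KeyUsageCertSign,
		}
		der, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(der)
		cert, err := serverCert(ca, caKey)
		fmt.Println(len(cert), err)
	}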
	I0103 12:32:14.520330   20844 ubuntu.go:193] setting minikube options for container-runtime
	I0103 12:32:14.520478   20844 config.go:182] Loaded profile config "kubernetes-upgrade-738000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0103 12:32:14.520544   20844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:32:14.572526   20844 main.go:141] libmachine: Using SSH client type: native
	I0103 12:32:14.572839   20844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 59786 <nil> <nil>}
	I0103 12:32:14.572855   20844 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0103 12:32:14.692985   20844 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0103 12:32:14.692998   20844 ubuntu.go:71] root file system type: overlay
	I0103 12:32:14.693092   20844 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0103 12:32:14.693174   20844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:32:14.745596   20844 main.go:141] libmachine: Using SSH client type: native
	I0103 12:32:14.745901   20844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 59786 <nil> <nil>}
	I0103 12:32:14.745952   20844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0103 12:32:14.873413   20844 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0103 12:32:14.873525   20844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:32:14.926591   20844 main.go:141] libmachine: Using SSH client type: native
	I0103 12:32:14.926934   20844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 59786 <nil> <nil>}
	I0103 12:32:14.926948   20844 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0103 12:32:15.501322   20844 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:32:14.870227881 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0103 12:32:15.501352   20844 machine.go:91] provisioned docker machine in 1.632071613s
	I0103 12:32:15.501369   20844 client.go:171] LocalClient.Create took 6.045465376s
	I0103 12:32:15.501387   20844 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-738000" took 6.04555327s
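The unit diff a few lines up follows the standard override pattern for a unit whose base file already defines ExecStart: an empty "ExecStart=" first clears the inherited command, then the TLS-enabled dockerd line replaces it (otherwise systemd rejects the duplicate ExecStart, as the comment in the unit itself notes). The surrounding "diff -u ... || { mv ...; systemctl ... }" guard makes the write idempotent, so an unchanged unit costs one diff and no docker restart. A sketch of composing that guard string (updateUnitCmd is an illustrative name):

	package main

	import "fmt"

	// updateUnitCmd reproduces the guard used above: only swap in the
	// rendered unit and bounce docker when the file actually differs.
	func updateUnitCmd(unitPath string) string {
		return fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
				"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
				"sudo systemctl -f restart docker; }", unitPath)
	}

	func main() {
		fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
	}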
	I0103 12:32:15.501395   20844 start.go:300] post-start starting for "kubernetes-upgrade-738000" (driver="docker")
	I0103 12:32:15.501405   20844 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 12:32:15.501482   20844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 12:32:15.501542   20844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:32:15.555873   20844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59786 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:32:15.643907   20844 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 12:32:15.647761   20844 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 12:32:15.647784   20844 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 12:32:15.647791   20844 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 12:32:15.647797   20844 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 12:32:15.647808   20844 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/addons for local assets ...
	I0103 12:32:15.647920   20844 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/files for local assets ...
	I0103 12:32:15.648106   20844 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> 110902.pem in /etc/ssl/certs
	I0103 12:32:15.648315   20844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 12:32:15.656300   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:32:15.676610   20844 start.go:303] post-start completed in 175.206147ms
	I0103 12:32:15.677160   20844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-738000
	I0103 12:32:15.729219   20844 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/config.json ...
	I0103 12:32:15.729727   20844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 12:32:15.729801   20844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:32:15.781634   20844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59786 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:32:15.865887   20844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 12:32:15.870937   20844 start.go:128] duration metric: createHost completed in 6.47307921s
	I0103 12:32:15.870961   20844 start.go:83] releasing machines lock for "kubernetes-upgrade-738000", held for 6.473252599s
	I0103 12:32:15.871040   20844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-738000
	I0103 12:32:15.922445   20844 ssh_runner.go:195] Run: cat /version.json
	I0103 12:32:15.922454   20844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 12:32:15.922526   20844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:32:15.922527   20844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:32:15.976251   20844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59786 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:32:15.976261   20844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59786 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:32:16.167097   20844 ssh_runner.go:195] Run: systemctl --version
	I0103 12:32:16.172026   20844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 12:32:16.177031   20844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0103 12:32:16.199625   20844 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0103 12:32:16.199727   20844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0103 12:32:16.215259   20844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0103 12:32:16.230797   20844 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
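The three find/sed runs above normalize whatever CNI profiles ship in the base image: the loopback config gains an explicit "name" field and cniVersion 1.0.0, and the bridge and podman profiles are rewritten onto the 10.244.0.0/16 pod subnet. A hand-run equivalent of the check, assuming the same file names reported above (the grep itself is illustrative, not something minikube runs):

    sudo ls /etc/cni/net.d
    sudo grep -H '"subnet"' /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/87-podman-bridge.conflist
    # after patching, both profiles should read: "subnet": "10.244.0.0/16"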
	I0103 12:32:16.230817   20844 start.go:475] detecting cgroup driver to use...
	I0103 12:32:16.230834   20844 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:32:16.230937   20844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:32:16.245618   20844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0103 12:32:16.255451   20844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0103 12:32:16.265030   20844 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0103 12:32:16.265089   20844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0103 12:32:16.274610   20844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:32:16.283969   20844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0103 12:32:16.293714   20844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:32:16.302874   20844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 12:32:16.311861   20844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0103 12:32:16.321336   20844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 12:32:16.329385   20844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 12:32:16.337311   20844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:32:16.396022   20844 ssh_runner.go:195] Run: sudo systemctl restart containerd
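Taken together, the sed sequence above rewrites /etc/containerd/config.toml to match the detected host: pause image pinned to registry.k8s.io/pause:3.1, restrict_oom_score_adj disabled, SystemdCgroup = false for the "cgroupfs" driver, the legacy io.containerd.runtime.v1.linux and runc.v1 names mapped to io.containerd.runc.v2, and conf_dir pointed at /etc/cni/net.d. One way to confirm the result on the node, assuming the stock TOML key names:

    grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' /etc/containerd/config.toml
    #   sandbox_image = "registry.k8s.io/pause:3.1"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"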
	I0103 12:32:16.476323   20844 start.go:475] detecting cgroup driver to use...
	I0103 12:32:16.476344   20844 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:32:16.476423   20844 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0103 12:32:16.493430   20844 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0103 12:32:16.493493   20844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0103 12:32:16.504990   20844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:32:16.521308   20844 ssh_runner.go:195] Run: which cri-dockerd
	I0103 12:32:16.526167   20844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0103 12:32:16.535909   20844 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0103 12:32:16.553266   20844 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0103 12:32:16.615749   20844 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0103 12:32:16.699762   20844 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0103 12:32:16.699845   20844 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0103 12:32:16.716543   20844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:32:16.796080   20844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 12:32:17.032136   20844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 12:32:17.074900   20844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
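The 130-byte daemon.json written above is not echoed into the log. A plausible minimal shape for it, given that the goal is the "cgroupfs" driver, is Docker's documented exec-opts key; the exact contents here are an assumption:

    cat /etc/docker/daemon.json
    # e.g. {"exec-opts": ["native.cgroupdriver=cgroupfs"]}   <- assumed, not shown in the log
    docker info --format '{{.CgroupDriver}}'                 # should print: cgroupfs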
	I0103 12:32:17.142254   20844 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0103 12:32:17.142389   20844 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-738000 dig +short host.docker.internal
	I0103 12:32:17.255306   20844 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0103 12:32:17.255408   20844 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0103 12:32:17.259732   20844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 12:32:17.270144   20844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:32:17.322118   20844 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0103 12:32:17.322199   20844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:32:17.341382   20844 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0103 12:32:17.341397   20844 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0103 12:32:17.341449   20844 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0103 12:32:17.350158   20844 ssh_runner.go:195] Run: which lz4
	I0103 12:32:17.354232   20844 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 12:32:17.358367   20844 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 12:32:17.358395   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0103 12:32:22.392813   20844 docker.go:635] Took 5.038708 seconds to copy over tarball
	I0103 12:32:22.392895   20844 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 12:32:23.928420   20844 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.535528724s)
	I0103 12:32:23.928441   20844 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 12:32:23.966875   20844 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0103 12:32:23.975239   20844 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0103 12:32:23.990784   20844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:32:24.046111   20844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 12:32:24.852443   20844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:32:24.875618   20844 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0103 12:32:24.875630   20844 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
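Note the naming mismatch behind "wasn't preloaded": the v1.16.0 preload tarball ships images under their old k8s.gcr.io names, while this minikube checks for registry.k8s.io names, so the preload is judged incomplete and the image-cache path below is attempted. The mismatch is easy to reproduce by hand on the node:

    docker images --format '{{.Repository}}:{{.Tag}}' | grep kube-apiserver
    # k8s.gcr.io/kube-apiserver:v1.16.0            <- present from the preload
    docker image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.16.0
    # Error: No such image ...                     <- the name actually being checked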
	I0103 12:32:24.875641   20844 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 12:32:24.881550   20844 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0103 12:32:24.881567   20844 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:32:24.881551   20844 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:32:24.882228   20844 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:32:24.882263   20844 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0103 12:32:24.882328   20844 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:32:24.882353   20844 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:32:24.882650   20844 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0103 12:32:24.887024   20844 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:32:24.887178   20844 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0103 12:32:24.887331   20844 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:32:24.888328   20844 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0103 12:32:24.888529   20844 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:32:24.889265   20844 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:32:24.889267   20844 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0103 12:32:24.889291   20844 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:32:25.318368   20844 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0103 12:32:25.336638   20844 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:32:25.340130   20844 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0103 12:32:25.340216   20844 docker.go:323] Removing image: registry.k8s.io/pause:3.1
	I0103 12:32:25.340332   20844 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0103 12:32:25.353649   20844 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:32:25.359603   20844 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0103 12:32:25.359646   20844 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:32:25.359733   20844 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:32:25.364559   20844 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0103 12:32:25.376548   20844 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0103 12:32:25.376582   20844 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:32:25.376647   20844 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:32:25.378876   20844 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0103 12:32:25.385204   20844 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0103 12:32:25.399448   20844 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0103 12:32:25.401579   20844 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0103 12:32:25.401600   20844 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.2
	I0103 12:32:25.401666   20844 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0103 12:32:25.420022   20844 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0103 12:32:25.481180   20844 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:32:25.501658   20844 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0103 12:32:25.501685   20844 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:32:25.501762   20844 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:32:25.520085   20844 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0103 12:32:25.521506   20844 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:32:25.541138   20844 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0103 12:32:25.541166   20844 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:32:25.541228   20844 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:32:25.559084   20844 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0103 12:32:25.625620   20844 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0103 12:32:25.646197   20844 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0103 12:32:25.646222   20844 docker.go:323] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0103 12:32:25.646286   20844 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0103 12:32:25.666637   20844 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0103 12:32:25.759218   20844 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:32:25.778830   20844 cache_images.go:92] LoadImages completed in 903.187899ms
	W0103 12:32:25.778891   20844 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
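With neither the registry.k8s.io-named images in the runtime nor cached archives under .minikube/cache/images on disk, LoadImages gives up and the start proceeds with the k8s.gcr.io-named preload only. Retagging the preloaded images would satisfy the name check; a sketch of that workaround (not something this run does):

    for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
      docker tag "k8s.gcr.io/$img:v1.16.0" "registry.k8s.io/$img:v1.16.0"
    done
    docker tag k8s.gcr.io/etcd:3.3.15-0 registry.k8s.io/etcd:3.3.15-0
    docker tag k8s.gcr.io/coredns:1.6.2 registry.k8s.io/coredns:1.6.2
    docker tag k8s.gcr.io/pause:3.1     registry.k8s.io/pause:3.1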
	I0103 12:32:25.778976   20844 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0103 12:32:25.843696   20844 cni.go:84] Creating CNI manager for ""
	I0103 12:32:25.843714   20844 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0103 12:32:25.843726   20844 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 12:32:25.843743   20844 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-738000 NodeName:kubernetes-upgrade-738000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 12:32:25.843848   20844 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-738000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-738000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
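The config above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2180-byte scp below) and promoted to kubeadm.yaml just before init runs. It can also be validated without touching node state via kubeadm's dry-run mode; a sketch, assuming the v1.16.0 binary staged under /var/lib/minikube/binaries:

    sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run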
	
	I0103 12:32:25.843902   20844 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-738000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 12:32:25.843962   20844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0103 12:32:25.853367   20844 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 12:32:25.853462   20844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 12:32:25.861758   20844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0103 12:32:25.877274   20844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 12:32:25.892949   20844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
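At this point the kubelet unit, its kubeadm drop-in, and the new kubeadm.yaml are all on disk, but the service has not been restarted; kubeadm activates it later during init. A quick way to see the unit systemd will actually run, assuming stock systemd tooling:

    systemctl cat kubelet         # merged view: kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload  # pick up the files just written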
	I0103 12:32:25.908732   20844 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0103 12:32:25.913113   20844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 12:32:25.923631   20844 certs.go:56] Setting up /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000 for IP: 192.168.67.2
	I0103 12:32:25.923652   20844 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a30c05f18415c794a1ae2617714fd3a6ba516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:32:25.923830   20844 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key
	I0103 12:32:25.923906   20844 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key
	I0103 12:32:25.923952   20844 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.key
	I0103 12:32:25.923964   20844 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.crt with IP's: []
	I0103 12:32:26.029296   20844 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.crt ...
	I0103 12:32:26.029311   20844 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.crt: {Name:mk0e4da23e0c2aa4d39c1c9d58b83204978c2c74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:32:26.029653   20844 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.key ...
	I0103 12:32:26.029662   20844 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.key: {Name:mk1e2293dc6a910fcba987aa8c2ddd03186074a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:32:26.029876   20844 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.key.c7fa3a9e
	I0103 12:32:26.029895   20844 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 12:32:26.122995   20844 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.crt.c7fa3a9e ...
	I0103 12:32:26.123009   20844 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.crt.c7fa3a9e: {Name:mkdc48ee5121e0b5a1ff4914d8d1446dd5fe85e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:32:26.123329   20844 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.key.c7fa3a9e ...
	I0103 12:32:26.123340   20844 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.key.c7fa3a9e: {Name:mkd2d07f072d157f808fcc0707b75883b4dbbe71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:32:26.123579   20844 certs.go:337] copying /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.crt
	I0103 12:32:26.123771   20844 certs.go:341] copying /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.key
	I0103 12:32:26.123943   20844 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/proxy-client.key
	I0103 12:32:26.123959   20844 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/proxy-client.crt with IP's: []
	I0103 12:32:26.209627   20844 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/proxy-client.crt ...
	I0103 12:32:26.209640   20844 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/proxy-client.crt: {Name:mkf6bcfd68c406543a6bf60c0d8850f8895815bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:32:26.209918   20844 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/proxy-client.key ...
	I0103 12:32:26.209928   20844 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/proxy-client.key: {Name:mk31dd7bc0a5abf4843838ea8f45f02c530ea656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:32:26.210315   20844 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem (1338 bytes)
	W0103 12:32:26.210367   20844 certs.go:433] ignoring /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090_empty.pem, impossibly tiny 0 bytes
	I0103 12:32:26.210383   20844 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 12:32:26.210415   20844 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem (1078 bytes)
	I0103 12:32:26.210445   20844 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem (1123 bytes)
	I0103 12:32:26.210480   20844 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem (1679 bytes)
	I0103 12:32:26.210545   20844 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:32:26.211116   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 12:32:26.233252   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 12:32:26.254032   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 12:32:26.274872   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 12:32:26.295796   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 12:32:26.316129   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 12:32:26.337243   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 12:32:26.358333   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 12:32:26.378698   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /usr/share/ca-certificates/110902.pem (1708 bytes)
	I0103 12:32:26.399335   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 12:32:26.419817   20844 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem --> /usr/share/ca-certificates/11090.pem (1338 bytes)
	I0103 12:32:26.440490   20844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 12:32:26.456510   20844 ssh_runner.go:195] Run: openssl version
	I0103 12:32:26.462646   20844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110902.pem && ln -fs /usr/share/ca-certificates/110902.pem /etc/ssl/certs/110902.pem"
	I0103 12:32:26.472165   20844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110902.pem
	I0103 12:32:26.476422   20844 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:57 /usr/share/ca-certificates/110902.pem
	I0103 12:32:26.476472   20844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110902.pem
	I0103 12:32:26.483347   20844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110902.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 12:32:26.492401   20844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 12:32:26.501253   20844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:32:26.505297   20844 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:32:26.505344   20844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:32:26.511824   20844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 12:32:26.520928   20844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11090.pem && ln -fs /usr/share/ca-certificates/11090.pem /etc/ssl/certs/11090.pem"
	I0103 12:32:26.529881   20844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11090.pem
	I0103 12:32:26.534044   20844 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:57 /usr/share/ca-certificates/11090.pem
	I0103 12:32:26.534091   20844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11090.pem
	I0103 12:32:26.540928   20844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11090.pem /etc/ssl/certs/51391683.0"
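The test/ln pairs above build OpenSSL's hashed-directory layout: each CA in /etc/ssl/certs is reachable as <subject-hash>.0, which is how verification locates it. The hash names used in this run (3ec20f2e, b5213941, 51391683) come straight from the x509 -hash calls:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/110902.pem      # -> 3ec20f2e
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem  # -> b5213941
    openssl x509 -hash -noout -in /usr/share/ca-certificates/11090.pem       # -> 51391683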
	I0103 12:32:26.549949   20844 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 12:32:26.553922   20844 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 12:32:26.553968   20844 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:32:26.554075   20844 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 12:32:26.571708   20844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 12:32:26.580864   20844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 12:32:26.589339   20844 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 12:32:26.589397   20844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:32:26.597779   20844 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 12:32:26.597810   20844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 12:32:26.647218   20844 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0103 12:32:26.647263   20844 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 12:32:26.905960   20844 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 12:32:26.906043   20844 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 12:32:26.906140   20844 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 12:32:27.085787   20844 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 12:32:27.086464   20844 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 12:32:27.092316   20844 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0103 12:32:27.164693   20844 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 12:32:27.208656   20844 out.go:204]   - Generating certificates and keys ...
	I0103 12:32:27.208741   20844 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 12:32:27.208806   20844 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 12:32:27.356281   20844 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 12:32:27.488430   20844 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 12:32:27.554030   20844 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 12:32:27.699841   20844 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 12:32:28.044855   20844 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 12:32:28.044956   20844 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-738000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0103 12:32:28.125595   20844 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 12:32:28.125716   20844 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-738000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0103 12:32:28.259807   20844 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 12:32:28.534583   20844 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 12:32:28.686177   20844 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 12:32:28.686238   20844 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 12:32:28.736860   20844 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 12:32:28.812797   20844 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 12:32:28.919786   20844 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 12:32:28.967885   20844 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 12:32:28.968377   20844 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 12:32:28.989807   20844 out.go:204]   - Booting up control plane ...
	I0103 12:32:28.989911   20844 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 12:32:28.989987   20844 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 12:32:28.990069   20844 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 12:32:28.990152   20844 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 12:32:28.990294   20844 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 12:33:08.975973   20844 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0103 12:33:08.976516   20844 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:33:08.976670   20844 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:33:13.977694   20844 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:33:13.977976   20844 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:33:23.978452   20844 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:33:23.978650   20844 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:33:43.992688   20844 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:33:43.992881   20844 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:34:24.019369   20844 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:34:24.019613   20844 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:34:24.019636   20844 kubeadm.go:322] 
	I0103 12:34:24.019691   20844 kubeadm.go:322] Unfortunately, an error has occurred:
	I0103 12:34:24.019750   20844 kubeadm.go:322] 	timed out waiting for the condition
	I0103 12:34:24.019764   20844 kubeadm.go:322] 
	I0103 12:34:24.019806   20844 kubeadm.go:322] This error is likely caused by:
	I0103 12:34:24.019860   20844 kubeadm.go:322] 	- The kubelet is not running
	I0103 12:34:24.019973   20844 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0103 12:34:24.019981   20844 kubeadm.go:322] 
	I0103 12:34:24.020152   20844 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0103 12:34:24.020200   20844 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0103 12:34:24.020242   20844 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0103 12:34:24.020251   20844 kubeadm.go:322] 
	I0103 12:34:24.020373   20844 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0103 12:34:24.020493   20844 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0103 12:34:24.020591   20844 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0103 12:34:24.020648   20844 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0103 12:34:24.020742   20844 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0103 12:34:24.020780   20844 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0103 12:34:24.022025   20844 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0103 12:34:24.022121   20844 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0103 12:34:24.022227   20844 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0103 12:34:24.022334   20844 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 12:34:24.022431   20844 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0103 12:34:24.022497   20844 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0103 12:34:24.022571   20844 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-738000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-738000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
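The init dies in the wait-control-plane phase: the kubelet never answers on 127.0.0.1:10248, so the static control-plane pods are never confirmed. The warnings in stderr point at the usual levers; kubeadm's suggested triage, gathered into one runnable sequence (the tail filter is illustrative):

    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50
    docker ps -a | grep kube | grep -v pause      # find a crashed control-plane container
    docker info --format '{{.CgroupDriver}}'      # kubeadm recommends "systemd" here
    sudo systemctl enable kubelet.service         # addresses the Service-Kubelet warning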
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-738000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-738000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
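Note: the init attempt above fails because the kubelet never answers on localhost:10248 within the 4m0s wait-control-plane window. The preflight warnings name the likeliest culprits: Docker is using the "cgroupfs" cgroup driver where "systemd" is recommended, and Docker 24.0.7 is far newer than anything kubeadm v1.16.0 was validated against (18.09). A minimal sketch for confirming the driver mismatch from the host follows; the container name is taken from this log, and the sketch assumes the kic node container exposes the usual docker and systemd tooling:

    # Docker-side cgroup driver inside the node container ("cgroupfs" here)
    $ docker exec kubernetes-upgrade-738000 docker info --format '{{.CgroupDriver}}'

    # Driver the kubelet was configured with, as written by kubeadm
    $ docker exec kubernetes-upgrade-738000 cat /var/lib/kubelet/kubeadm-flags.env

    # The kubelet's own account of why it is not serving /healthz
    $ docker exec kubernetes-upgrade-738000 journalctl -u kubelet -n 50 --no-pager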
	
	I0103 12:34:24.022606   20844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0103 12:34:24.442934   20844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 12:34:24.453702   20844 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 12:34:24.453761   20844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:34:24.462069   20844 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 12:34:24.462104   20844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 12:34:24.529848   20844 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0103 12:34:24.529903   20844 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 12:34:24.864863   20844 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 12:34:24.864956   20844 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 12:34:24.865039   20844 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 12:34:25.057013   20844 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 12:34:25.057846   20844 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 12:34:25.063951   20844 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0103 12:34:25.142132   20844 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 12:34:25.248965   20844 out.go:204]   - Generating certificates and keys ...
	I0103 12:34:25.249051   20844 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 12:34:25.249168   20844 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 12:34:25.249268   20844 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0103 12:34:25.249324   20844 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0103 12:34:25.249405   20844 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0103 12:34:25.249468   20844 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0103 12:34:25.249566   20844 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0103 12:34:25.249633   20844 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0103 12:34:25.249719   20844 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0103 12:34:25.249820   20844 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0103 12:34:25.249886   20844 kubeadm.go:322] [certs] Using the existing "sa" key
	I0103 12:34:25.249986   20844 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 12:34:25.402700   20844 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 12:34:25.487665   20844 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 12:34:25.907319   20844 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 12:34:26.032899   20844 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 12:34:26.033442   20844 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 12:34:26.054896   20844 out.go:204]   - Booting up control plane ...
	I0103 12:34:26.055009   20844 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 12:34:26.055078   20844 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 12:34:26.055132   20844 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 12:34:26.055217   20844 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 12:34:26.057083   20844 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 12:35:06.058206   20844 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0103 12:35:06.058960   20844 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:35:06.059146   20844 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:35:11.060825   20844 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:35:11.061043   20844 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:35:21.061166   20844 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:35:21.061383   20844 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:35:41.062013   20844 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:35:41.062211   20844 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:36:21.062578   20844 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:36:21.062762   20844 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:36:21.062775   20844 kubeadm.go:322] 
	I0103 12:36:21.062811   20844 kubeadm.go:322] Unfortunately, an error has occurred:
	I0103 12:36:21.062852   20844 kubeadm.go:322] 	timed out waiting for the condition
	I0103 12:36:21.062859   20844 kubeadm.go:322] 
	I0103 12:36:21.062900   20844 kubeadm.go:322] This error is likely caused by:
	I0103 12:36:21.062930   20844 kubeadm.go:322] 	- The kubelet is not running
	I0103 12:36:21.063024   20844 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0103 12:36:21.063036   20844 kubeadm.go:322] 
	I0103 12:36:21.063132   20844 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0103 12:36:21.063162   20844 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0103 12:36:21.063190   20844 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0103 12:36:21.063195   20844 kubeadm.go:322] 
	I0103 12:36:21.063277   20844 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0103 12:36:21.063374   20844 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0103 12:36:21.063469   20844 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0103 12:36:21.063525   20844 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0103 12:36:21.063620   20844 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0103 12:36:21.063670   20844 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0103 12:36:21.065204   20844 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0103 12:36:21.065296   20844 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0103 12:36:21.065436   20844 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0103 12:36:21.065523   20844 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 12:36:21.065640   20844 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0103 12:36:21.065718   20844 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0103 12:36:21.065785   20844 kubeadm.go:406] StartCluster complete in 3m54.474659068s
	I0103 12:36:21.065872   20844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:36:21.084470   20844 logs.go:284] 0 containers: []
	W0103 12:36:21.084484   20844 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:36:21.084573   20844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:36:21.104989   20844 logs.go:284] 0 containers: []
	W0103 12:36:21.105005   20844 logs.go:286] No container was found matching "etcd"
	I0103 12:36:21.105079   20844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:36:21.123718   20844 logs.go:284] 0 containers: []
	W0103 12:36:21.123734   20844 logs.go:286] No container was found matching "coredns"
	I0103 12:36:21.123818   20844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:36:21.144952   20844 logs.go:284] 0 containers: []
	W0103 12:36:21.144968   20844 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:36:21.145045   20844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:36:21.165693   20844 logs.go:284] 0 containers: []
	W0103 12:36:21.165707   20844 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:36:21.165782   20844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:36:21.184301   20844 logs.go:284] 0 containers: []
	W0103 12:36:21.184315   20844 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:36:21.184389   20844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:36:21.204843   20844 logs.go:284] 0 containers: []
	W0103 12:36:21.204859   20844 logs.go:286] No container was found matching "kindnet"
	I0103 12:36:21.204873   20844 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:36:21.204880   20844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:36:21.282904   20844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:36:21.282921   20844 logs.go:123] Gathering logs for Docker ...
	I0103 12:36:21.282935   20844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:36:21.302251   20844 logs.go:123] Gathering logs for container status ...
	I0103 12:36:21.302269   20844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:36:21.359871   20844 logs.go:123] Gathering logs for kubelet ...
	I0103 12:36:21.359888   20844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:36:21.400475   20844 logs.go:123] Gathering logs for dmesg ...
	I0103 12:36:21.400491   20844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0103 12:36:21.414710   20844 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0103 12:36:21.414731   20844 out.go:239] * 
	W0103 12:36:21.414788   20844 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0103 12:36:21.414805   20844 out.go:239] * 
	W0103 12:36:21.415385   20844 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 12:36:21.477909   20844 out.go:177] 
	W0103 12:36:21.521019   20844 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0103 12:36:21.521055   20844 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0103 12:36:21.521120   20844 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0103 12:36:21.583950   20844 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
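The exit status 109 here corresponds to the K8S_KUBELET_NOT_RUNNING reason printed above, and the output itself suggests two follow-ups. A minimal sketch of both, reusing the profile name and flag text from this log (the --extra-config value is minikube's own suggestion, not something verified here):

    # Retry the failing start with the suggested kubelet cgroup driver
    $ out/minikube-darwin-amd64 start -p kubernetes-upgrade-738000 \
        --memory=2200 --kubernetes-version=v1.16.0 --driver=docker \
        --extra-config=kubelet.cgroup-driver=systemd

    # Capture the full log bundle for a bug report, as the failure box asks
    $ out/minikube-darwin-amd64 logs -p kubernetes-upgrade-738000 --file=logs.txt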
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-738000
version_upgrade_test.go:240: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-738000: (1.578725936s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-738000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-738000 status --format={{.Host}}: exit status 7 (112.871386ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
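For context on "may be ok": minikube status is expected to exit non-zero when the cluster is down; if memory serves, the exit code is a bitmask of stopped components (host, apiserver, kubeconfig), so 7 after a clean stop simply means everything is down. The Go template selects a single field from the status struct, as the log shows:

    # Print only the Host field; exits 7 because the profile is stopped
    $ out/minikube-darwin-amd64 -p kubernetes-upgrade-738000 status --format={{.Host}}
    Stopped
    $ echo $?
    7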
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (4m34.479315107s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-738000 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (744.363352ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-738000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-738000
	    minikube start -p kubernetes-upgrade-738000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7380002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-738000 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
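Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is the expected guard here: minikube declines to move an existing v1.29.0-rc.2 cluster's state back to v1.16.0 and instead prints the three escape hatches above. For reference, a sketch of the recovery path the test takes next, reusing the exact commands from this log:

    # The refused downgrade leaves the profile intact; restart it at the
    # version it is already on
    $ out/minikube-darwin-amd64 start -p kubernetes-upgrade-738000 \
        --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=docker

    # Confirm client and server versions afterwards
    $ kubectl --context kubernetes-upgrade-738000 version --output=json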
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:288: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (33.829652773s)
version_upgrade_test.go:292: *** TestKubernetesUpgrade FAILED at 2024-01-03 12:41:32.482037 -0800 PST m=+3081.500746232
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-738000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-738000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d49c8bf453fe49590678ff1d7b0352f7c24806a2f72db2c215eb4f9940f4820a",
	        "Created": "2024-01-03T20:32:12.80889168Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 231575,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:36:24.61538803Z",
	            "FinishedAt": "2024-01-03T20:36:22.093469225Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/d49c8bf453fe49590678ff1d7b0352f7c24806a2f72db2c215eb4f9940f4820a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d49c8bf453fe49590678ff1d7b0352f7c24806a2f72db2c215eb4f9940f4820a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d49c8bf453fe49590678ff1d7b0352f7c24806a2f72db2c215eb4f9940f4820a/hosts",
	        "LogPath": "/var/lib/docker/containers/d49c8bf453fe49590678ff1d7b0352f7c24806a2f72db2c215eb4f9940f4820a/d49c8bf453fe49590678ff1d7b0352f7c24806a2f72db2c215eb4f9940f4820a-json.log",
	        "Name": "/kubernetes-upgrade-738000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-738000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-738000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2e90f996ade6f3e0778b7ed8387aedfb38aa6b3a35039656db1b582f76222a5a-init/diff:/var/lib/docker/overlay2/d51c25870073ca49ae45bcaffff5d04b6853b273710b15cd26d3414e5d7cfab6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e90f996ade6f3e0778b7ed8387aedfb38aa6b3a35039656db1b582f76222a5a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e90f996ade6f3e0778b7ed8387aedfb38aa6b3a35039656db1b582f76222a5a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e90f996ade6f3e0778b7ed8387aedfb38aa6b3a35039656db1b582f76222a5a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-738000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-738000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-738000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-738000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-738000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "871e385fca3ab1c718120d6780affa0d04558de74927a3c015468f9087ca44c2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60092"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60093"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60089"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60091"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/871e385fca3a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-738000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d49c8bf453fe",
	                        "kubernetes-upgrade-738000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "689be59d66d670f849b8835b072560ff1224aebb98b1ce2288c6b429ac4347ac",
	                    "EndpointID": "fcba99e7b8074a73bf7dd3905d77fd6b7476fdfb006d55a28280be123cfdc158",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
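For reference, the port bindings in the inspect output above are what the harness later reads back with Go templates (see the `docker container inspect -f` invocations further down in this log). A minimal sketch of the same lookup, assuming the docker CLI on PATH and the container name from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template shape the test harness uses: index into
		// .NetworkSettings.Ports and take the first binding's HostPort.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"kubernetes-upgrade-738000").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // e.g. 60092, per the Ports map above
	}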
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-738000 -n kubernetes-upgrade-738000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-738000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-738000 logs -n 25: (2.969147216s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo ip a s                                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo ip r s                                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo iptables-save                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000 sudo cat                | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000 sudo cat                | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000 sudo cat                | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST | 03 Jan 24 12:41 PST |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST |                     |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-236000                         | enable-default-cni-236000 | jenkins | v1.32.0 | 03 Jan 24 12:41 PST |                     |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 12:40:58
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
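	Every entry below follows that klog header layout. A minimal sketch (hypothetical helper, not part of the test suite) that splits one such line into its fields:

	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    )

	    // Matches [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	    var klogLine = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	    func main() {
	    	m := klogLine.FindStringSubmatch("I0103 12:40:58.726014   24139 out.go:296] Setting OutFile to fd 1 ...")
	    	if m != nil {
	    		fmt.Printf("level=%s month=%s day=%s time=%s tid=%s file=%s line=%s msg=%q\n",
	    			m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
	    	}
	    }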
	I0103 12:40:58.726014   24139 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:40:58.726327   24139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:40:58.726336   24139 out.go:309] Setting ErrFile to fd 2...
	I0103 12:40:58.726343   24139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:40:58.726662   24139 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:40:58.729405   24139 out.go:303] Setting JSON to false
	I0103 12:40:58.754123   24139 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7828,"bootTime":1704306630,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 12:40:58.754220   24139 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 12:40:58.777837   24139 out.go:177] * [kubernetes-upgrade-738000] minikube v1.32.0 on Darwin 14.2
	I0103 12:40:58.889616   24139 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 12:40:58.851729   24139 notify.go:220] Checking for updates...
	I0103 12:40:58.963665   24139 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 12:40:59.021383   24139 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 12:40:59.079597   24139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 12:40:59.137760   24139 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	I0103 12:40:59.181552   24139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 12:40:57.691458   23990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 12:40:58.689488   23990 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.394969797s)
	I0103 12:40:58.689534   23990 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
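	Unescaped, the sed pipeline above adds a log directive ahead of the errors plugin and splices this hosts block in front of the existing forward . /etc/resolv.conf stanza of the CoreDNS Corefile (reconstructed from the command itself):

	        hosts {
	           192.168.65.254 host.minikube.internal
	           fallthrough
	        }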
	I0103 12:40:59.238213   23990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.632065789s)
	I0103 12:40:59.238251   23990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.618889996s)
	I0103 12:40:59.238308   23990 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.546847004s)
	I0103 12:40:59.238423   23990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-236000
	I0103 12:40:59.273898   23990 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0103 12:40:59.219512   24139 config.go:182] Loaded profile config "kubernetes-upgrade-738000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0103 12:40:59.220292   24139 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 12:40:59.345956   24139 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 12:40:59.346265   24139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:40:59.474206   24139 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:73 SystemTime:2024-01-03 20:40:59.463930889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:40:59.495961   24139 out.go:177] * Using the docker driver based on existing profile
	I0103 12:40:59.332581   23990 addons.go:508] enable addons completed in 2.219778171s: enabled=[storage-provisioner default-storageclass]
	I0103 12:40:59.342867   23990 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-236000" to be "Ready" ...
	I0103 12:40:59.347435   23990 node_ready.go:49] node "enable-default-cni-236000" has status "Ready":"True"
	I0103 12:40:59.347452   23990 node_ready.go:38] duration metric: took 4.560364ms waiting for node "enable-default-cni-236000" to be "Ready" ...
	I0103 12:40:59.347461   23990 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 12:40:59.359360   23990 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-7vqzq" in "kube-system" namespace to be "Ready" ...
	I0103 12:40:59.542885   24139 start.go:298] selected driver: docker
	I0103 12:40:59.542904   24139 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:40:59.542971   24139 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 12:40:59.546164   24139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:40:59.675954   24139 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:73 SystemTime:2024-01-03 20:40:59.662041809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:40:59.676315   24139 cni.go:84] Creating CNI manager for ""
	I0103 12:40:59.676335   24139 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 12:40:59.676351   24139 start_flags.go:323] config:
	{Name:kubernetes-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:40:59.718849   24139 out.go:177] * Starting control plane node kubernetes-upgrade-738000 in cluster kubernetes-upgrade-738000
	I0103 12:40:59.739821   24139 cache.go:121] Beginning downloading kic base image for docker with docker
	I0103 12:40:59.760986   24139 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 12:40:59.818737   24139 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0103 12:40:59.818818   24139 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0103 12:40:59.818825   24139 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 12:40:59.818847   24139 cache.go:56] Caching tarball of preloaded images
	I0103 12:40:59.819042   24139 preload.go:174] Found /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0103 12:40:59.819061   24139 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0103 12:40:59.819740   24139 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/config.json ...
	I0103 12:40:59.885990   24139 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 12:40:59.886012   24139 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 12:40:59.886042   24139 cache.go:194] Successfully downloaded all kic artifacts
	I0103 12:40:59.886095   24139 start.go:365] acquiring machines lock for kubernetes-upgrade-738000: {Name:mk8869f3f7d225e1a6198587201403ee92199d84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 12:40:59.886183   24139 start.go:369] acquired machines lock for "kubernetes-upgrade-738000" in 65.779µs
	I0103 12:40:59.886206   24139 start.go:96] Skipping create...Using existing machine configuration
	I0103 12:40:59.886214   24139 fix.go:54] fixHost starting: 
	I0103 12:40:59.886457   24139 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-738000 --format={{.State.Status}}
	I0103 12:40:59.952316   24139 fix.go:102] recreateIfNeeded on kubernetes-upgrade-738000: state=Running err=<nil>
	W0103 12:40:59.952351   24139 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 12:40:59.994837   24139 out.go:177] * Updating the running docker "kubernetes-upgrade-738000" container ...
	I0103 12:41:00.366775   23990 pod_ready.go:92] pod "coredns-5dd5756b68-7vqzq" in "kube-system" namespace has status "Ready":"True"
	I0103 12:41:00.366790   23990 pod_ready.go:81] duration metric: took 1.007423789s waiting for pod "coredns-5dd5756b68-7vqzq" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:00.366797   23990 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-ldxl8" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:00.369048   23990 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ldxl8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ldxl8" not found
	I0103 12:41:00.369061   23990 pod_ready.go:81] duration metric: took 2.259378ms waiting for pod "coredns-5dd5756b68-ldxl8" in "kube-system" namespace to be "Ready" ...
	E0103 12:41:00.369067   23990 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ldxl8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ldxl8" not found
	I0103 12:41:00.369072   23990 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-236000" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:00.375036   23990 pod_ready.go:92] pod "etcd-enable-default-cni-236000" in "kube-system" namespace has status "Ready":"True"
	I0103 12:41:00.375051   23990 pod_ready.go:81] duration metric: took 5.972841ms waiting for pod "etcd-enable-default-cni-236000" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:00.375059   23990 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-236000" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:00.380942   23990 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-236000" in "kube-system" namespace has status "Ready":"True"
	I0103 12:41:00.380954   23990 pod_ready.go:81] duration metric: took 5.889193ms waiting for pod "kube-apiserver-enable-default-cni-236000" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:00.380962   23990 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-236000" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:00.386355   23990 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-236000" in "kube-system" namespace has status "Ready":"True"
	I0103 12:41:00.386371   23990 pod_ready.go:81] duration metric: took 5.402693ms waiting for pod "kube-controller-manager-enable-default-cni-236000" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:00.386381   23990 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-z6z4z" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:00.746321   23990 pod_ready.go:92] pod "kube-proxy-z6z4z" in "kube-system" namespace has status "Ready":"True"
	I0103 12:41:00.746334   23990 pod_ready.go:81] duration metric: took 359.952887ms waiting for pod "kube-proxy-z6z4z" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:00.746340   23990 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-236000" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:01.147148   23990 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-236000" in "kube-system" namespace has status "Ready":"True"
	I0103 12:41:01.147162   23990 pod_ready.go:81] duration metric: took 400.821655ms waiting for pod "kube-scheduler-enable-default-cni-236000" in "kube-system" namespace to be "Ready" ...
	I0103 12:41:01.147170   23990 pod_ready.go:38] duration metric: took 1.799708824s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
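	The pod_ready waits above poll each pod's Ready condition until it reports True. A minimal client-go sketch of the same idea (hypothetical helper, not minikube's pod_ready.go):

	    import (
	    	"context"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // waitPodReady polls until the named pod's Ready condition is True.
	    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
	    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	    		if err != nil {
	    			return false, nil // treat lookup errors as "not ready yet" and keep polling
	    		}
	    		for _, c := range pod.Status.Conditions {
	    			if c.Type == corev1.PodReady {
	    				return c.Status == corev1.ConditionTrue, nil
	    			}
	    		}
	    		return false, nil
	    	})
	    }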
	I0103 12:41:01.147186   23990 api_server.go:52] waiting for apiserver process to appear ...
	I0103 12:41:01.147248   23990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:41:01.157608   23990 api_server.go:72] duration metric: took 3.532209657s to wait for apiserver process to appear ...
	I0103 12:41:01.157623   23990 api_server.go:88] waiting for apiserver healthz status ...
	I0103 12:41:01.157646   23990 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60598/healthz ...
	I0103 12:41:01.163575   23990 api_server.go:279] https://127.0.0.1:60598/healthz returned 200:
	ok
	I0103 12:41:01.165154   23990 api_server.go:141] control plane version: v1.28.4
	I0103 12:41:01.165166   23990 api_server.go:131] duration metric: took 7.537602ms to wait for apiserver health ...
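	The healthz probe above is a plain HTTPS GET that expects a 200 with body "ok". A minimal sketch against the forwarded apiserver port from this run (TLS verification is skipped here for brevity, unlike a production check that would trust the cluster CA):

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout:   5 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	    	}
	    	resp, err := client.Get("https://127.0.0.1:60598/healthz") // port taken from the log above
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer resp.Body.Close()
	    	body, _ := io.ReadAll(resp.Body)
	    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	    }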
	I0103 12:41:01.165172   23990 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 12:41:01.348578   23990 system_pods.go:59] 7 kube-system pods found
	I0103 12:41:01.348597   23990 system_pods.go:61] "coredns-5dd5756b68-7vqzq" [d8394780-dc5f-468d-9630-3b7507039b2f] Running
	I0103 12:41:01.348602   23990 system_pods.go:61] "etcd-enable-default-cni-236000" [57879b9b-214b-40ce-b8b0-37e9e9452bf0] Running
	I0103 12:41:01.348605   23990 system_pods.go:61] "kube-apiserver-enable-default-cni-236000" [0cc3c3ad-0406-4564-a66c-1329215089b9] Running
	I0103 12:41:01.348609   23990 system_pods.go:61] "kube-controller-manager-enable-default-cni-236000" [861b73b4-b783-4a23-ac99-9bf95863114e] Running
	I0103 12:41:01.348631   23990 system_pods.go:61] "kube-proxy-z6z4z" [1547bb74-4416-45c8-8953-69c5b9656dd2] Running
	I0103 12:41:01.348638   23990 system_pods.go:61] "kube-scheduler-enable-default-cni-236000" [44ee9877-a429-4ef4-aae2-bb0d05a49933] Running
	I0103 12:41:01.348645   23990 system_pods.go:61] "storage-provisioner" [b248916f-62c9-412e-9f3c-5b4d42f67231] Running
	I0103 12:41:01.348650   23990 system_pods.go:74] duration metric: took 183.474333ms to wait for pod list to return data ...
	I0103 12:41:01.348657   23990 default_sa.go:34] waiting for default service account to be created ...
	I0103 12:41:01.548108   23990 default_sa.go:45] found service account: "default"
	I0103 12:41:01.548123   23990 default_sa.go:55] duration metric: took 199.462763ms for default service account to be created ...
	I0103 12:41:01.548129   23990 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 12:41:01.749584   23990 system_pods.go:86] 7 kube-system pods found
	I0103 12:41:01.749603   23990 system_pods.go:89] "coredns-5dd5756b68-7vqzq" [d8394780-dc5f-468d-9630-3b7507039b2f] Running
	I0103 12:41:01.749610   23990 system_pods.go:89] "etcd-enable-default-cni-236000" [57879b9b-214b-40ce-b8b0-37e9e9452bf0] Running
	I0103 12:41:01.749614   23990 system_pods.go:89] "kube-apiserver-enable-default-cni-236000" [0cc3c3ad-0406-4564-a66c-1329215089b9] Running
	I0103 12:41:01.749632   23990 system_pods.go:89] "kube-controller-manager-enable-default-cni-236000" [861b73b4-b783-4a23-ac99-9bf95863114e] Running
	I0103 12:41:01.749642   23990 system_pods.go:89] "kube-proxy-z6z4z" [1547bb74-4416-45c8-8953-69c5b9656dd2] Running
	I0103 12:41:01.749648   23990 system_pods.go:89] "kube-scheduler-enable-default-cni-236000" [44ee9877-a429-4ef4-aae2-bb0d05a49933] Running
	I0103 12:41:01.749652   23990 system_pods.go:89] "storage-provisioner" [b248916f-62c9-412e-9f3c-5b4d42f67231] Running
	I0103 12:41:01.749657   23990 system_pods.go:126] duration metric: took 201.526803ms to wait for k8s-apps to be running ...
	I0103 12:41:01.749666   23990 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 12:41:01.749719   23990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 12:41:01.760734   23990 system_svc.go:56] duration metric: took 11.066993ms WaitForService to wait for kubelet.
	I0103 12:41:01.760758   23990 kubeadm.go:581] duration metric: took 4.135367546s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 12:41:01.760770   23990 node_conditions.go:102] verifying NodePressure condition ...
	I0103 12:41:01.946386   23990 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0103 12:41:01.946402   23990 node_conditions.go:123] node cpu capacity is 12
	I0103 12:41:01.946414   23990 node_conditions.go:105] duration metric: took 185.641758ms to run NodePressure ...
	I0103 12:41:01.946425   23990 start.go:228] waiting for startup goroutines ...
	I0103 12:41:01.946437   23990 start.go:233] waiting for cluster config update ...
	I0103 12:41:01.946461   23990 start.go:242] writing updated cluster config ...
	I0103 12:41:01.946784   23990 ssh_runner.go:195] Run: rm -f paused
	I0103 12:41:01.988045   23990 start.go:600] kubectl: 1.28.2, cluster: 1.28.4 (minor skew: 0)
	I0103 12:41:02.010084   23990 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-236000" cluster and "default" namespace by default
	I0103 12:41:00.015716   24139 machine.go:88] provisioning docker machine ...
	I0103 12:41:00.015753   24139 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-738000"
	I0103 12:41:00.015854   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:00.069876   24139 main.go:141] libmachine: Using SSH client type: native
	I0103 12:41:00.070244   24139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 60092 <nil> <nil>}
	I0103 12:41:00.070257   24139 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-738000 && echo "kubernetes-upgrade-738000" | sudo tee /etc/hostname
	I0103 12:41:00.200857   24139 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-738000
	
	I0103 12:41:00.200955   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:00.254637   24139 main.go:141] libmachine: Using SSH client type: native
	I0103 12:41:00.254934   24139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 60092 <nil> <nil>}
	I0103 12:41:00.254950   24139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-738000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-738000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-738000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 12:41:00.372754   24139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 12:41:00.372778   24139 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17885-10646/.minikube CaCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17885-10646/.minikube}
	I0103 12:41:00.372800   24139 ubuntu.go:177] setting up certificates
	I0103 12:41:00.372813   24139 provision.go:83] configureAuth start
	I0103 12:41:00.372894   24139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-738000
	I0103 12:41:00.425259   24139 provision.go:138] copyHostCerts
	I0103 12:41:00.425373   24139 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem, removing ...
	I0103 12:41:00.425383   24139 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem
	I0103 12:41:00.425501   24139 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem (1679 bytes)
	I0103 12:41:00.425746   24139 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem, removing ...
	I0103 12:41:00.425753   24139 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem
	I0103 12:41:00.425825   24139 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem (1078 bytes)
	I0103 12:41:00.426041   24139 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem, removing ...
	I0103 12:41:00.426051   24139 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem
	I0103 12:41:00.426129   24139 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem (1123 bytes)
	I0103 12:41:00.426288   24139 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-738000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-738000]
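	configureAuth above issues a server certificate whose SANs cover the container IP, loopback, and the machine names. A minimal crypto/x509 sketch of a cert with that shape (hypothetical; not minikube's provision.go), given an existing CA pair caCert/caKey:

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"time"
	    )

	    // newServerCert issues a server cert for the SANs logged above, signed by caCert/caKey.
	    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
	    	key, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		return nil, nil, err
	    	}
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-738000"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-738000"},
	    		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	    	}
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	    	if err != nil {
	    		return nil, nil, err
	    	}
	    	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	    	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	    	return certPEM, keyPEM, nil
	    }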
	I0103 12:41:00.598562   24139 provision.go:172] copyRemoteCerts
	I0103 12:41:00.598631   24139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 12:41:00.598686   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:00.649900   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60092 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:41:00.736426   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 12:41:00.757403   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0103 12:41:00.778198   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 12:41:00.800462   24139 provision.go:86] duration metric: configureAuth took 427.639521ms
	I0103 12:41:00.800476   24139 ubuntu.go:193] setting minikube options for container-runtime
	I0103 12:41:00.800620   24139 config.go:182] Loaded profile config "kubernetes-upgrade-738000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0103 12:41:00.800681   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:00.853608   24139 main.go:141] libmachine: Using SSH client type: native
	I0103 12:41:00.853906   24139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 60092 <nil> <nil>}
	I0103 12:41:00.853924   24139 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0103 12:41:00.974265   24139 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0103 12:41:00.974277   24139 ubuntu.go:71] root file system type: overlay
	I0103 12:41:00.974355   24139 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0103 12:41:00.974441   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:01.027932   24139 main.go:141] libmachine: Using SSH client type: native
	I0103 12:41:01.028237   24139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 60092 <nil> <nil>}
	I0103 12:41:01.028286   24139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0103 12:41:01.157663   24139 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0103 12:41:01.157746   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:01.211303   24139 main.go:141] libmachine: Using SSH client type: native
	I0103 12:41:01.211608   24139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 60092 <nil> <nil>}
	I0103 12:41:01.211628   24139 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0103 12:41:01.335247   24139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 12:41:01.335281   24139 machine.go:91] provisioned docker machine in 1.319555392s
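
The unit update just completed is deliberately idempotent: the rendered unit is written to docker.service.new, diffed against the live file, and the move/daemon-reload/enable/restart chain runs only when the two differ (diff exits non-zero on any difference or a missing file). A sketch of that pattern, taken from the command above:

    # Swap in the new unit and bounce Docker only if its content changed.
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
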
	I0103 12:41:01.335289   24139 start.go:300] post-start starting for "kubernetes-upgrade-738000" (driver="docker")
	I0103 12:41:01.335300   24139 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 12:41:01.335359   24139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 12:41:01.335422   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:01.388231   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60092 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:41:01.473358   24139 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 12:41:01.477425   24139 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 12:41:01.477453   24139 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 12:41:01.477460   24139 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 12:41:01.477466   24139 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 12:41:01.477478   24139 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/addons for local assets ...
	I0103 12:41:01.477572   24139 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/files for local assets ...
	I0103 12:41:01.477755   24139 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> 110902.pem in /etc/ssl/certs
	I0103 12:41:01.477958   24139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 12:41:01.486010   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:41:01.506347   24139 start.go:303] post-start completed in 171.050972ms
	I0103 12:41:01.506418   24139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 12:41:01.506473   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:01.559172   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60092 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:41:01.643838   24139 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 12:41:01.648762   24139 fix.go:56] fixHost completed within 1.762569205s
	I0103 12:41:01.648776   24139 start.go:83] releasing machines lock for "kubernetes-upgrade-738000", held for 1.762607354s
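
post-start closes with two quick capacity probes on /var, one for percent used and one for free space; both are plain df/awk pipelines as issued above:

    df -h /var | awk 'NR==2{print $5}'    # percent of /var in use
    df -BG /var | awk 'NR==2{print $4}'   # free space on /var in gigabytes
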
	I0103 12:41:01.648855   24139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-738000
	I0103 12:41:01.702482   24139 ssh_runner.go:195] Run: cat /version.json
	I0103 12:41:01.702504   24139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 12:41:01.702565   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:01.702609   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:01.777492   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60092 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:41:01.777485   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60092 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:41:01.967449   24139 ssh_runner.go:195] Run: systemctl --version
	I0103 12:41:01.972819   24139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 12:41:01.977965   24139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 12:41:01.978043   24139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0103 12:41:01.987411   24139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0103 12:41:01.996307   24139 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
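
The two find/sed passes before this normalize any pre-existing bridge or podman CNI configs to the cluster pod CIDR and drop IPv6 entries; here they match nothing, hence "nothing to configure". A condensed sketch of the bridge pass (the full logged command also strips "dst" routes; this keeps only the subnet rewrite):

    # Point every bridge CNI config at the pod CIDR used by the cluster.
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*bridge*' -not -name '*podman*' \
      -exec sudo sed -i -r 's|"subnet": ".*"|"subnet": "10.244.0.0/16"|g' {} \;
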
	I0103 12:41:01.996325   24139 start.go:475] detecting cgroup driver to use...
	I0103 12:41:01.996342   24139 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:41:01.996459   24139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:41:02.012566   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0103 12:41:02.031031   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0103 12:41:02.041523   24139 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0103 12:41:02.041606   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0103 12:41:02.051464   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:41:02.061892   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0103 12:41:02.081769   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:41:02.093340   24139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 12:41:02.103345   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0103 12:41:02.113741   24139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 12:41:02.123596   24139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 12:41:02.133196   24139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:41:02.202074   24139 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0103 12:41:12.349947   24139 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.147980464s)
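
Although Docker is the selected runtime, containerd's config is rewritten in place so both daemons agree on the pause image, cgroup driver, and CNI conf dir, and then containerd is restarted (the restart alone accounted for ~10s here). The edits are plain sed substitutions; two representative ones from the sequence above:

    # Pin the sandbox (pause) image and force the cgroupfs driver in containerd.
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
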
	I0103 12:41:12.349972   24139 start.go:475] detecting cgroup driver to use...
	I0103 12:41:12.349985   24139 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:41:12.350060   24139 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0103 12:41:12.363548   24139 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0103 12:41:12.363637   24139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0103 12:41:12.374858   24139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:41:12.391746   24139 ssh_runner.go:195] Run: which cri-dockerd
	I0103 12:41:12.396547   24139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0103 12:41:12.406426   24139 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0103 12:41:12.423707   24139 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0103 12:41:12.518998   24139 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0103 12:41:12.616111   24139 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0103 12:41:12.616204   24139 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0103 12:41:12.632588   24139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:41:12.715759   24139 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 12:41:13.031418   24139 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0103 12:41:13.091138   24139 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0103 12:41:13.155752   24139 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0103 12:41:13.225152   24139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:41:13.286413   24139 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0103 12:41:13.312089   24139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:41:13.400156   24139 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0103 12:41:13.501325   24139 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0103 12:41:13.501429   24139 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0103 12:41:13.506325   24139 start.go:543] Will wait 60s for crictl version
	I0103 12:41:13.506393   24139 ssh_runner.go:195] Run: which crictl
	I0103 12:41:13.510766   24139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 12:41:13.561964   24139 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0103 12:41:13.562040   24139 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 12:41:13.586687   24139 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
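
With cri-dockerd restarted, the runtime is verified in three steps: stat the CRI socket, query it with crictl (RuntimeName docker, RuntimeVersion 24.0.7 above), and cross-check the daemon version directly. The same checks by hand (crictl reads its endpoint from the /etc/crictl.yaml written earlier):

    stat /var/run/cri-dockerd.sock                   # the socket must exist before crictl can connect
    sudo /usr/bin/crictl version                     # runtime name/version via the CRI endpoint
    docker version --format '{{.Server.Version}}'    # daemon version straight from Docker
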
	I0103 12:41:13.631015   24139 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0103 12:41:13.631131   24139 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-738000 dig +short host.docker.internal
	I0103 12:41:13.751703   24139 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0103 12:41:13.751792   24139 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0103 12:41:13.756314   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:13.808503   24139 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0103 12:41:13.808587   24139 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:41:13.828088   24139 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0103 12:41:13.828110   24139 docker.go:601] Images already preloaded, skipping extraction
	I0103 12:41:13.828183   24139 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:41:13.848495   24139 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0103 12:41:13.848512   24139 cache_images.go:84] Images are preloaded, skipping loading
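
The preload check is a set comparison: list every image the Docker daemon holds and confirm the expected v1.29.0-rc.2 control-plane images are among them (the old k8s.gcr.io v1.16.0 images from the pre-upgrade cluster remain cached alongside). A one-line membership test for a single image, using the same format string as above:

    # Exit 0 iff the expected apiserver image is already loaded.
    docker images --format '{{.Repository}}:{{.Tag}}' | grep -qx 'registry.k8s.io/kube-apiserver:v1.29.0-rc.2'
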
	I0103 12:41:13.848595   24139 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0103 12:41:13.897384   24139 cni.go:84] Creating CNI manager for ""
	I0103 12:41:13.897405   24139 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 12:41:13.897419   24139 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 12:41:13.897442   24139 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-738000 NodeName:kubernetes-upgrade-738000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 12:41:13.897566   24139 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-738000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
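
The rendered config is four YAML documents in one stream, separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. It is staged as kubeadm.yaml.new and only promoted to kubeadm.yaml after the diff later in the log. One way to sanity-check such a file against the target kubeadm (shown as an illustration; the test itself does not run this, and kubeadm config validate assumes a recent kubeadm such as the v1.29 binary staged on the node):

    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
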
	
	I0103 12:41:13.897648   24139 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-738000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 12:41:13.897712   24139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0103 12:41:13.906605   24139 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 12:41:13.906661   24139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 12:41:13.914870   24139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I0103 12:41:13.930650   24139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0103 12:41:13.946182   24139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
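
Three files are pushed from memory: the 10-kubeadm.conf drop-in (which blanks ExecStart and redefines it, the same override trick used in docker.service above), the base kubelet.service, and the staged kubeadm.yaml.new. systemd only sees a new drop-in after a daemon-reload; the merged unit can be inspected the same way the log inspected docker.service:

    sudo systemctl daemon-reload
    sudo systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in
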
	I0103 12:41:13.961646   24139 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0103 12:41:13.966056   24139 certs.go:56] Setting up /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000 for IP: 192.168.67.2
	I0103 12:41:13.966081   24139 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a30c05f18415c794a1ae2617714fd3a6ba516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:41:13.966262   24139 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key
	I0103 12:41:13.966331   24139 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key
	I0103 12:41:13.966429   24139 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.key
	I0103 12:41:13.966515   24139 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.key.c7fa3a9e
	I0103 12:41:13.966584   24139 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/proxy-client.key
	I0103 12:41:13.966804   24139 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem (1338 bytes)
	W0103 12:41:13.966878   24139 certs.go:433] ignoring /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090_empty.pem, impossibly tiny 0 bytes
	I0103 12:41:13.966892   24139 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 12:41:13.966931   24139 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem (1078 bytes)
	I0103 12:41:13.966967   24139 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem (1123 bytes)
	I0103 12:41:13.966996   24139 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem (1679 bytes)
	I0103 12:41:13.967070   24139 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:41:13.967651   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 12:41:13.988323   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 12:41:14.008896   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 12:41:14.029410   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 12:41:14.049769   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 12:41:14.070295   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 12:41:14.090899   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 12:41:14.111376   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 12:41:14.132700   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem --> /usr/share/ca-certificates/11090.pem (1338 bytes)
	I0103 12:41:14.153875   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /usr/share/ca-certificates/110902.pem (1708 bytes)
	I0103 12:41:14.175113   24139 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 12:41:14.195610   24139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 12:41:14.211510   24139 ssh_runner.go:195] Run: openssl version
	I0103 12:41:14.217270   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11090.pem && ln -fs /usr/share/ca-certificates/11090.pem /etc/ssl/certs/11090.pem"
	I0103 12:41:14.226626   24139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11090.pem
	I0103 12:41:14.230889   24139 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:57 /usr/share/ca-certificates/11090.pem
	I0103 12:41:14.230949   24139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11090.pem
	I0103 12:41:14.237810   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11090.pem /etc/ssl/certs/51391683.0"
	I0103 12:41:14.246576   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110902.pem && ln -fs /usr/share/ca-certificates/110902.pem /etc/ssl/certs/110902.pem"
	I0103 12:41:14.255434   24139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110902.pem
	I0103 12:41:14.259496   24139 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:57 /usr/share/ca-certificates/110902.pem
	I0103 12:41:14.259544   24139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110902.pem
	I0103 12:41:14.266233   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110902.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 12:41:14.274747   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 12:41:14.283784   24139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:41:14.288279   24139 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:41:14.288336   24139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:41:14.294818   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
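
Each CA dropped into /usr/share/ca-certificates is activated through OpenSSL's hashed-symlink convention: openssl x509 -hash prints the subject hash (b5213941 for minikubeCA here), and a <hash>.0 symlink under /etc/ssl/certs is what verifiers actually scan. The per-certificate sequence, as run above for minikubeCA:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
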
	I0103 12:41:14.303113   24139 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 12:41:14.307258   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 12:41:14.313747   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 12:41:14.320068   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 12:41:14.326416   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 12:41:14.332779   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 12:41:14.339166   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
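
Six existing control-plane certificates are then screened with openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); a failure would force regeneration. The same checks as a loop:

    # Flag any control-plane cert that expires within 24 hours.
    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 || echo "$c expires within 24h"
    done
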
	I0103 12:41:14.345688   24139 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:41:14.345807   24139 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 12:41:14.364010   24139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 12:41:14.372608   24139 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 12:41:14.372625   24139 kubeadm.go:636] restartCluster start
	I0103 12:41:14.372678   24139 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 12:41:14.380787   24139 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:14.380880   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:14.433435   24139 kubeconfig.go:92] found "kubernetes-upgrade-738000" server: "https://127.0.0.1:60091"
	I0103 12:41:14.434183   24139 kapi.go:59] client config for kubernetes-upgrade-738000: &rest.Config{Host:"https://127.0.0.1:60091", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.key", CAFile:"/Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27eefa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 12:41:14.434861   24139 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 12:41:14.443730   24139 api_server.go:166] Checking apiserver status ...
	I0103 12:41:14.443785   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:41:14.453137   24139 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:14.943791   24139 api_server.go:166] Checking apiserver status ...
	I0103 12:41:14.943895   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:41:14.955400   24139 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:15.444001   24139 api_server.go:166] Checking apiserver status ...
	I0103 12:41:15.444171   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:41:15.455463   24139 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:15.944969   24139 api_server.go:166] Checking apiserver status ...
	I0103 12:41:15.945119   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:41:15.956862   24139 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:16.444157   24139 api_server.go:166] Checking apiserver status ...
	I0103 12:41:16.444287   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:41:16.455757   24139 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:16.944669   24139 api_server.go:166] Checking apiserver status ...
	I0103 12:41:16.944787   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:41:16.954732   24139 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:17.444208   24139 api_server.go:166] Checking apiserver status ...
	I0103 12:41:17.444293   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:41:17.455108   24139 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:17.944229   24139 api_server.go:166] Checking apiserver status ...
	I0103 12:41:17.944333   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:41:17.955671   24139 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:18.443945   24139 api_server.go:166] Checking apiserver status ...
	I0103 12:41:18.444047   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:41:18.493641   24139 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:18.943782   24139 api_server.go:166] Checking apiserver status ...
	I0103 12:41:18.943870   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:41:18.993396   24139 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/14341/cgroup
	W0103 12:41:19.008222   24139 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/14341/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:19.008294   24139 ssh_runner.go:195] Run: ls
	I0103 12:41:19.014897   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:22.285023   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 12:41:22.285069   24139 retry.go:31] will retry after 298.117459ms: https://127.0.0.1:60091/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 12:41:22.583791   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:22.590686   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:41:22.590715   24139 retry.go:31] will retry after 268.534607ms: https://127.0.0.1:60091/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:41:22.859327   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:22.864760   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:41:22.864783   24139 retry.go:31] will retry after 347.160729ms: https://127.0.0.1:60091/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:41:23.212359   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:23.217292   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:41:23.217314   24139 retry.go:31] will retry after 515.734996ms: https://127.0.0.1:60091/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:41:23.733805   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:23.754605   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 200:
	ok
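
The healthz probe passes through three states above: 403 while anonymous access is still forbidden (RBAC bootstrap incomplete), 500 while the rbac/bootstrap-roles and scheduling poststarthooks report failed, and finally 200 ok, with retry.go backing off a few hundred milliseconds between attempts. The endpoint can be polled by hand the same way (curl shown as an illustration; -k skips TLS verification just as the anonymous probe does):

    # Poll until the apiserver reports healthy.
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://127.0.0.1:60091/healthz)" = 200 ]; do sleep 0.3; done
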
	I0103 12:41:23.767375   24139 system_pods.go:86] 5 kube-system pods found
	I0103 12:41:23.767396   24139 system_pods.go:89] "etcd-kubernetes-upgrade-738000" [4563cff0-1407-4b80-842a-e17fdc73da5e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 12:41:23.767406   24139 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-738000" [5aa49fc3-52e6-4d43-8a08-a9f02847c7e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 12:41:23.767417   24139 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-738000" [3ef681ea-54fa-4f2b-a31f-c1c23d69b83b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 12:41:23.767423   24139 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-738000" [96df7188-50bd-4e86-9ed2-edaa056d9b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 12:41:23.767428   24139 system_pods.go:89] "storage-provisioner" [ee766b3c-a15d-4d18-94b1-757246dbfc26] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0103 12:41:23.767435   24139 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-proxy
	I0103 12:41:23.767443   24139 kubeadm.go:1135] stopping kube-system containers ...
	I0103 12:41:23.767510   24139 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 12:41:23.789407   24139 docker.go:469] Stopping containers: [ebc7b24efc66 f03da69a876f c8e75aeb9a4c b16442661562 f564f2dc0965 197ce05ba15c adc4a14d9e17 3d6a581b3a13 6b8f63530f23 f65c5caf4b9f cdf0e478d474 7975dc153d1c c3772f7a62ab 9632d7d1b21f c50fb39447c0 b8d2617e6247]
	I0103 12:41:23.789489   24139 ssh_runner.go:195] Run: docker stop ebc7b24efc66 f03da69a876f c8e75aeb9a4c b16442661562 f564f2dc0965 197ce05ba15c adc4a14d9e17 3d6a581b3a13 6b8f63530f23 f65c5caf4b9f cdf0e478d474 7975dc153d1c c3772f7a62ab 9632d7d1b21f c50fb39447c0 b8d2617e6247
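
Reconfiguration needs the old control plane quiesced first. kube-system containers are discovered by the k8s_.*_(kube-system)_ name pattern that kubelet assigns to Docker-managed pod containers, then stopped in one batch; the list-then-stop pair above pipes together as:

    # Stop every Docker container belonging to a kube-system pod.
    docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}' | xargs -r docker stop
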
	I0103 12:41:24.509906   24139 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 12:41:24.547423   24139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:41:24.584187   24139 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5647 Jan  3 20:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  3 20:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jan  3 20:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  3 20:40 /etc/kubernetes/scheduler.conf
	
	I0103 12:41:24.584308   24139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0103 12:41:24.599819   24139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0103 12:41:24.616796   24139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0103 12:41:24.692662   24139 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:24.692735   24139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0103 12:41:24.704274   24139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0103 12:41:24.718145   24139 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:41:24.718219   24139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0103 12:41:24.791446   24139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 12:41:24.806634   24139 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
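
Each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443; controller-manager.conf and scheduler.conf fail that grep above and are removed so the kubeconfig phase below regenerates them. The per-file check-or-remove, condensed with grep -q:

    f=/etc/kubernetes/scheduler.conf
    sudo grep -q https://control-plane.minikube.internal:8443 "$f" || sudo rm -f "$f"
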
	I0103 12:41:24.806673   24139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:41:24.917414   24139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:41:25.771355   24139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:41:25.921869   24139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:41:25.977659   24139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
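
Rather than a full kubeadm init, restartCluster replays only the phases it needs, in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, each against the staged config and with the versioned binaries prepended to PATH. The sequence as issued above, folded into a loop (the unquoted $phase is intentional so "certs all" splits into subcommand plus argument):

    KPATH=/var/lib/minikube/binaries/v1.29.0-rc.2
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
      sudo env PATH="$KPATH:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
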
	I0103 12:41:26.038122   24139 api_server.go:52] waiting for apiserver process to appear ...
	I0103 12:41:26.038199   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:41:26.538303   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:41:27.038296   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:41:27.094126   24139 api_server.go:72] duration metric: took 1.056017156s to wait for apiserver process to appear ...
	I0103 12:41:27.094143   24139 api_server.go:88] waiting for apiserver healthz status ...
	I0103 12:41:27.094167   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:27.095760   24139 api_server.go:269] stopped: https://127.0.0.1:60091/healthz: Get "https://127.0.0.1:60091/healthz": EOF
	I0103 12:41:27.594724   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:29.441983   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 12:41:29.441999   24139 api_server.go:103] status: https://127.0.0.1:60091/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 12:41:29.442008   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:29.487370   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 12:41:29.487390   24139 api_server.go:103] status: https://127.0.0.1:60091/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 12:41:29.594781   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:29.603504   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 12:41:29.603532   24139 api_server.go:103] status: https://127.0.0.1:60091/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:41:30.094764   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:30.100176   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 12:41:30.100188   24139 api_server.go:103] status: https://127.0.0.1:60091/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:41:30.594181   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:30.602096   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 12:41:30.602123   24139 api_server.go:103] status: https://127.0.0.1:60091/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:41:31.095352   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:31.100425   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 200:
	ok
	I0103 12:41:31.107143   24139 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 12:41:31.107160   24139 api_server.go:131] duration metric: took 4.013063467s to wait for apiserver health ...
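	
	The wait above is a simple poll: roughly every 500ms the client GETs /healthz on the forwarded port and treats connection EOFs, anonymous 403s, and 500s from still-failing poststarthooks as "not ready yet", until a bare 200 "ok" arrives. A minimal Go sketch of such a poller, assuming the same forwarded port as this run; InsecureSkipVerify is a sketch-only shortcut standing in for trusting minikube's CA:
	
	    package main
	
	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )
	
	    func main() {
	    	client := &http.Client{
	    		Timeout: 2 * time.Second,
	    		// Sketch shortcut: a real client would load and trust minikube's CA instead.
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	for {
	    		resp, err := client.Get("https://127.0.0.1:60091/healthz")
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				fmt.Printf("healthz: %s\n", body) // plain "ok" once all poststarthooks pass
	    				return
	    			}
	    			// 403 (anonymous user) and 500 (poststarthooks pending) both mean retry.
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    }
	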
	I0103 12:41:31.107166   24139 cni.go:84] Creating CNI manager for ""
	I0103 12:41:31.107174   24139 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 12:41:31.127953   24139 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 12:41:31.148627   24139 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 12:41:31.159454   24139 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
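	
	Configuring bridge CNI here amounts to writing a single conflist into /etc/cni/net.d. The log only records that 457 bytes were copied, not their content, so the JSON payload below is a representative bridge-plus-portmap conflist (an illustrative assumption, not the verbatim file), wrapped in a small Go sketch of the write:
	
	    package main
	
	    import "os"
	
	    // Representative bridge + portmap conflist. The subnet and plugin options
	    // here are assumptions for illustration, not the file from this run.
	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	        },
	        {"type": "portmap", "capabilities": {"portMappings": true}}
	      ]
	    }`
	
	    func main() {
	    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
	    		panic(err)
	    	}
	    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
	    		panic(err)
	    	}
	    }
	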
	I0103 12:41:31.176212   24139 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 12:41:31.182513   24139 system_pods.go:59] 5 kube-system pods found
	I0103 12:41:31.182531   24139 system_pods.go:61] "etcd-kubernetes-upgrade-738000" [4563cff0-1407-4b80-842a-e17fdc73da5e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 12:41:31.182537   24139 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-738000" [5aa49fc3-52e6-4d43-8a08-a9f02847c7e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 12:41:31.182545   24139 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-738000" [3ef681ea-54fa-4f2b-a31f-c1c23d69b83b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 12:41:31.182552   24139 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-738000" [96df7188-50bd-4e86-9ed2-edaa056d9b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 12:41:31.182557   24139 system_pods.go:61] "storage-provisioner" [ee766b3c-a15d-4d18-94b1-757246dbfc26] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0103 12:41:31.182569   24139 system_pods.go:74] duration metric: took 6.343986ms to wait for pod list to return data ...
	I0103 12:41:31.182578   24139 node_conditions.go:102] verifying NodePressure condition ...
	I0103 12:41:31.186147   24139 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0103 12:41:31.186164   24139 node_conditions.go:123] node cpu capacity is 12
	I0103 12:41:31.186182   24139 node_conditions.go:105] duration metric: took 3.591554ms to run NodePressure ...
	I0103 12:41:31.186199   24139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:41:31.445378   24139 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 12:41:31.453446   24139 ops.go:34] apiserver oom_adj: -16
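	
	The oom_adj probe confirms the apiserver is shielded from the OOM killer (-16). A small Go sketch of the same check, mirroring the `cat /proc/$(pgrep kube-apiserver)/oom_adj` pipeline from the log line above:
	
	    package main
	
	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"strings"
	    )
	
	    func main() {
	    	// Newest matching kube-apiserver PID, as pgrep -n returns it.
	    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	    	if err != nil {
	    		panic(err)
	    	}
	    	pid := strings.TrimSpace(string(out))
	    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("apiserver oom_adj: %s", adj) // a negative value such as -16
	    }
	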
	I0103 12:41:31.453467   24139 kubeadm.go:640] restartCluster took 17.081046962s
	I0103 12:41:31.453482   24139 kubeadm.go:406] StartCluster complete in 17.108016718s
	I0103 12:41:31.453500   24139 settings.go:142] acquiring lock: {Name:mk777823310df39752595be0f41f425a2c8eb047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:41:31.453588   24139 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 12:41:31.454383   24139 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/kubeconfig: {Name:mk61966fd03b327572b428e807810fbe63a7e94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:41:31.454698   24139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 12:41:31.454759   24139 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 12:41:31.454821   24139 config.go:182] Loaded profile config "kubernetes-upgrade-738000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0103 12:41:31.454827   24139 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-738000"
	I0103 12:41:31.454831   24139 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-738000"
	I0103 12:41:31.454842   24139 addons.go:237] Setting addon storage-provisioner=true in "kubernetes-upgrade-738000"
	I0103 12:41:31.454844   24139 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-738000"
	W0103 12:41:31.454848   24139 addons.go:246] addon storage-provisioner should already be in state true
	I0103 12:41:31.454887   24139 host.go:66] Checking if "kubernetes-upgrade-738000" exists ...
	I0103 12:41:31.455086   24139 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-738000 --format={{.State.Status}}
	I0103 12:41:31.456079   24139 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-738000 --format={{.State.Status}}
	I0103 12:41:31.456020   24139 kapi.go:59] client config for kubernetes-upgrade-738000: &rest.Config{Host:"https://127.0.0.1:60091", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.key", CAFile:"/Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27eefa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
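	
	The &rest.Config dump above is client-go's certificate-based client configuration. A minimal sketch of building the equivalent config by hand with client-go, reusing the host and certificate paths from the dump, and listing kube-system pods the way the pod waits in this log do:
	
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/rest"
	    )
	
	    func main() {
	    	// Host and certificate paths taken from the rest.Config dump above.
	    	cfg := &rest.Config{
	    		Host: "https://127.0.0.1:60091",
	    		TLSClientConfig: rest.TLSClientConfig{
	    			CertFile: "/Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.crt",
	    			KeyFile:  "/Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.key",
	    			CAFile:   "/Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt",
	    		},
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("kube-system pods found:", len(pods.Items))
	    }
	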
	I0103 12:41:31.462981   24139 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-738000" context rescaled to 1 replicas
	I0103 12:41:31.463019   24139 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0103 12:41:31.483674   24139 out.go:177] * Verifying Kubernetes components...
	I0103 12:41:31.525828   24139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 12:41:31.560693   24139 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:41:31.537510   24139 kapi.go:59] client config for kubernetes-upgrade-738000: &rest.Config{Host:"https://127.0.0.1:60091", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubernetes-upgrade-738000/client.key", CAFile:"/Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27eefa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 12:41:31.540612   24139 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 12:41:31.543423   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:31.560967   24139 addons.go:237] Setting addon default-storageclass=true in "kubernetes-upgrade-738000"
	W0103 12:41:31.583832   24139 addons.go:246] addon default-storageclass should already be in state true
	I0103 12:41:31.583877   24139 host.go:66] Checking if "kubernetes-upgrade-738000" exists ...
	I0103 12:41:31.583904   24139 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 12:41:31.583918   24139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 12:41:31.583994   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
	I0103 12:41:31.586233   24139 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-738000 --format={{.State.Status}}
	I0103 12:41:31.669171   24139 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 12:41:31.669197   24139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 12:41:31.669319   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-738000
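	
	Both SSH (22/tcp, host port 60092 in this run) and the apiserver (8443/tcp, host port 60091) are reached through host ports resolved with the Go template shown in the `docker container inspect -f` commands above. A minimal sketch of that lookup from Go, using a hypothetical hostPort helper (not minikube's own):
	
	    package main
	
	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )
	
	    // hostPort resolves the host port Docker mapped to a container port,
	    // mirroring the inspect template in the log lines above.
	    func hostPort(container, port string) (string, error) {
	    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	    	return strings.TrimSpace(string(out)), err
	    }
	
	    func main() {
	    	p, err := hostPort("kubernetes-upgrade-738000", "8443/tcp")
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("apiserver host port:", p) // e.g. 60091 in this run
	    }
	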
	I0103 12:41:31.669463   24139 api_server.go:52] waiting for apiserver process to appear ...
	I0103 12:41:31.669584   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:41:31.670635   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60092 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:41:31.692004   24139 api_server.go:72] duration metric: took 228.936755ms to wait for apiserver process to appear ...
	I0103 12:41:31.692026   24139 api_server.go:88] waiting for apiserver healthz status ...
	I0103 12:41:31.692042   24139 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60091/healthz ...
	I0103 12:41:31.699705   24139 api_server.go:279] https://127.0.0.1:60091/healthz returned 200:
	ok
	I0103 12:41:31.702657   24139 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 12:41:31.702687   24139 api_server.go:131] duration metric: took 10.652316ms to wait for apiserver health ...
	I0103 12:41:31.702728   24139 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 12:41:31.709752   24139 system_pods.go:59] 5 kube-system pods found
	I0103 12:41:31.709771   24139 system_pods.go:61] "etcd-kubernetes-upgrade-738000" [4563cff0-1407-4b80-842a-e17fdc73da5e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 12:41:31.709790   24139 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-738000" [5aa49fc3-52e6-4d43-8a08-a9f02847c7e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 12:41:31.709798   24139 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-738000" [3ef681ea-54fa-4f2b-a31f-c1c23d69b83b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 12:41:31.709814   24139 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-738000" [96df7188-50bd-4e86-9ed2-edaa056d9b7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 12:41:31.709821   24139 system_pods.go:61] "storage-provisioner" [ee766b3c-a15d-4d18-94b1-757246dbfc26] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0103 12:41:31.709828   24139 system_pods.go:74] duration metric: took 7.091224ms to wait for pod list to return data ...
	I0103 12:41:31.709837   24139 kubeadm.go:581] duration metric: took 246.792628ms to wait for : map[apiserver:true system_pods:true] ...
	I0103 12:41:31.709850   24139 node_conditions.go:102] verifying NodePressure condition ...
	I0103 12:41:31.714233   24139 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0103 12:41:31.714249   24139 node_conditions.go:123] node cpu capacity is 12
	I0103 12:41:31.714264   24139 node_conditions.go:105] duration metric: took 4.409083ms to run NodePressure ...
	I0103 12:41:31.714273   24139 start.go:228] waiting for startup goroutines ...
	I0103 12:41:31.740836   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60092 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/kubernetes-upgrade-738000/id_rsa Username:docker}
	I0103 12:41:31.781293   24139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 12:41:31.839591   24139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 12:41:32.330881   24139 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0103 12:41:32.352025   24139 addons.go:508] enable addons completed in 897.290235ms: enabled=[storage-provisioner default-storageclass]
	I0103 12:41:32.352066   24139 start.go:233] waiting for cluster config update ...
	I0103 12:41:32.352078   24139 start.go:242] writing updated cluster config ...
	I0103 12:41:32.352404   24139 ssh_runner.go:195] Run: rm -f paused
	I0103 12:41:32.395952   24139 start.go:600] kubectl: 1.28.2, cluster: 1.29.0-rc.2 (minor skew: 1)
	I0103 12:41:32.426746   24139 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-738000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jan 03 20:41:13 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:13Z" level=info msg="Setting cgroupDriver cgroupfs"
	Jan 03 20:41:13 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:13Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jan 03 20:41:13 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:13Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jan 03 20:41:13 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:13Z" level=info msg="Start cri-dockerd grpc backend"
	Jan 03 20:41:13 kubernetes-upgrade-738000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Jan 03 20:41:18 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f564f2dc0965dd9684ed92a2fe2aa1e9f4a645f14529544ef6e2e9826508e5ab/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 03 20:41:18 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3d6a581b3a13a5d20243c586af9202036f07327af2695113428861be335b43de/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 03 20:41:18 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/197ce05ba15c0812e03958d3e505e32f2ee2b8bbedc56a333ec165a410382d92/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 03 20:41:18 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/adc4a14d9e17c4808131f6f60046ef6a48a1c81f006a4c90412e1dcc7ec81576/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 03 20:41:23 kubernetes-upgrade-738000 dockerd[13351]: time="2024-01-03T20:41:23.898322847Z" level=info msg="ignoring event" container=f564f2dc0965dd9684ed92a2fe2aa1e9f4a645f14529544ef6e2e9826508e5ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 03 20:41:23 kubernetes-upgrade-738000 dockerd[13351]: time="2024-01-03T20:41:23.898372800Z" level=info msg="ignoring event" container=adc4a14d9e17c4808131f6f60046ef6a48a1c81f006a4c90412e1dcc7ec81576 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 03 20:41:23 kubernetes-upgrade-738000 dockerd[13351]: time="2024-01-03T20:41:23.904923282Z" level=info msg="ignoring event" container=3d6a581b3a13a5d20243c586af9202036f07327af2695113428861be335b43de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 03 20:41:23 kubernetes-upgrade-738000 dockerd[13351]: time="2024-01-03T20:41:23.905231058Z" level=info msg="ignoring event" container=c8e75aeb9a4c457d0b1d2c261d8f9c829e72be7e10f6d96d754a8b9ad26e14e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 03 20:41:23 kubernetes-upgrade-738000 dockerd[13351]: time="2024-01-03T20:41:23.984564811Z" level=info msg="ignoring event" container=197ce05ba15c0812e03958d3e505e32f2ee2b8bbedc56a333ec165a410382d92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 03 20:41:23 kubernetes-upgrade-738000 dockerd[13351]: time="2024-01-03T20:41:23.984742011Z" level=info msg="ignoring event" container=f03da69a876ff8385b23eb169f397652e1f409f43accc25a3d7719950abb640e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 03 20:41:23 kubernetes-upgrade-738000 dockerd[13351]: time="2024-01-03T20:41:23.996242331Z" level=info msg="ignoring event" container=b164426615623f75a6eaedbcfeaa56ae37e6db599db9bc7a178f862d17e10bdf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 03 20:41:24 kubernetes-upgrade-738000 dockerd[13351]: time="2024-01-03T20:41:24.463427239Z" level=info msg="ignoring event" container=ebc7b24efc66560079c1abe646a8d8b199a23063359ff572aebdc0709f5d2cef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 03 20:41:24 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b1a0fcf70062e41327d945f266b84f1d80e191760ae1bf1312671c1f8512c105/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 03 20:41:24 kubernetes-upgrade-738000 cri-dockerd[13641]: W0103 20:41:24.798250   13641 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jan 03 20:41:24 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3610641c34a686fc0d04b7295f9680c63ba572da03ab064d1eccccaa92249174/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 03 20:41:24 kubernetes-upgrade-738000 cri-dockerd[13641]: W0103 20:41:24.800172   13641 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jan 03 20:41:24 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fa4e630f3fd67dd037c4b64bf12b8f61cab4b5f87685df29f47cb7711e60395a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 03 20:41:24 kubernetes-upgrade-738000 cri-dockerd[13641]: W0103 20:41:24.822845   13641 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jan 03 20:41:24 kubernetes-upgrade-738000 cri-dockerd[13641]: time="2024-01-03T20:41:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c82a4c2405907985ef36189f7591fb7f2cb63314636599fc4b45711bcc61fd25/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 03 20:41:24 kubernetes-upgrade-738000 cri-dockerd[13641]: W0103 20:41:24.885473   13641 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	673ad5b79cb55       a0eed15eed449       8 seconds ago       Running             etcd                      2                   b1a0fcf70062e       etcd-kubernetes-upgrade-738000
	fe7d0a0372318       4270645ed6b7a       8 seconds ago       Running             kube-scheduler            2                   3610641c34a68       kube-scheduler-kubernetes-upgrade-738000
	ace76cf587749       d4e01cdf63970       8 seconds ago       Running             kube-controller-manager   2                   fa4e630f3fd67       kube-controller-manager-kubernetes-upgrade-738000
	6739dcc130eb6       bbb47a0f83324       8 seconds ago       Running             kube-apiserver            2                   c82a4c2405907       kube-apiserver-kubernetes-upgrade-738000
	ebc7b24efc665       bbb47a0f83324       16 seconds ago      Exited              kube-apiserver            1                   adc4a14d9e17c       kube-apiserver-kubernetes-upgrade-738000
	f03da69a876ff       d4e01cdf63970       16 seconds ago      Exited              kube-controller-manager   1                   f564f2dc0965d       kube-controller-manager-kubernetes-upgrade-738000
	c8e75aeb9a4c4       4270645ed6b7a       16 seconds ago      Exited              kube-scheduler            1                   3d6a581b3a13a       kube-scheduler-kubernetes-upgrade-738000
	b164426615623       a0eed15eed449       16 seconds ago      Exited              etcd                      1                   197ce05ba15c0       etcd-kubernetes-upgrade-738000
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-738000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-738000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=kubernetes-upgrade-738000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T12_40_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:40:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-738000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:41:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:41:29 +0000   Wed, 03 Jan 2024 20:40:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:41:29 +0000   Wed, 03 Jan 2024 20:40:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:41:29 +0000   Wed, 03 Jan 2024 20:40:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:41:29 +0000   Wed, 03 Jan 2024 20:41:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-738000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  115273188Ki
	  memory:             6075464Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  115273188Ki
	  memory:             6075464Ki
	  pods:               110
	System Info:
	  Machine ID:                 ccc680caba9a49079ecb5acbe21104d6
	  System UUID:                ccc680caba9a49079ecb5acbe21104d6
	  Boot ID:                    7299d595-62ce-46e7-b090-3992b1c02cb7
	  Kernel Version:             6.5.11-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-738000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kube-apiserver-kubernetes-upgrade-738000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-738000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-kubernetes-upgrade-738000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (5%)   0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 45s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet  Node kubernetes-upgrade-738000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet  Node kubernetes-upgrade-738000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet  Node kubernetes-upgrade-738000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 39s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  38s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  38s                kubelet  Node kubernetes-upgrade-738000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet  Node kubernetes-upgrade-738000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet  Node kubernetes-upgrade-738000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                33s                kubelet  Node kubernetes-upgrade-738000 status is now: NodeReady
	  Normal  NodeNotReady             28s                kubelet  Node kubernetes-upgrade-738000 status is now: NodeNotReady
	  Normal  Starting                 8s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-738000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-738000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet  Node kubernetes-upgrade-738000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[Jan 3 20:28] hrtimer: interrupt took 2402524 ns
	
	
	==> etcd [673ad5b79cb5] <==
	{"level":"info","ts":"2024-01-03T20:41:27.086407Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-03T20:41:27.086454Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-01-03T20:41:27.08655Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T20:41:27.086582Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T20:41:27.086589Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T20:41:27.086719Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-01-03T20:41:27.086725Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-01-03T20:41:27.08752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2024-01-03T20:41:27.08755Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-01-03T20:41:27.087603Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T20:41:27.087629Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T20:41:28.306989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-01-03T20:41:28.30704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-01-03T20:41:28.307114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-01-03T20:41:28.307125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2024-01-03T20:41:28.307129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-01-03T20:41:28.307135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2024-01-03T20:41:28.307141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-01-03T20:41:28.308479Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-738000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T20:41:28.308503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:41:28.308666Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:41:28.308893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T20:41:28.308963Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T20:41:28.312886Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-01-03T20:41:28.314181Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b16442661562] <==
	{"level":"info","ts":"2024-01-03T20:41:18.900741Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-01-03T20:41:20.785124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-03T20:41:20.785177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-03T20:41:20.785199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-01-03T20:41:20.785207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-01-03T20:41:20.785211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-01-03T20:41:20.785217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-01-03T20:41:20.785226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-01-03T20:41:20.78636Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-738000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T20:41:20.786407Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:41:20.786459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:41:20.786686Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T20:41:20.786751Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T20:41:20.789027Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-01-03T20:41:20.791337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T20:41:23.826414Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-01-03T20:41:23.826465Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-738000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2024-01-03T20:41:23.826545Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T20:41:23.826613Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T20:41:23.889884Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T20:41:23.890092Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-01-03T20:41:23.891638Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2024-01-03T20:41:23.894359Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-01-03T20:41:23.89448Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-01-03T20:41:23.894499Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-738000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	
	==> kernel <==
	 20:41:34 up  1:39,  0 users,  load average: 2.33, 1.67, 1.26
	Linux kubernetes-upgrade-738000 6.5.11-linuxkit #1 SMP PREEMPT_DYNAMIC Mon Dec  4 10:03:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [6739dcc130eb] <==
	I0103 20:41:29.420552       1 establishing_controller.go:76] Starting EstablishingController
	I0103 20:41:29.420568       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0103 20:41:29.420579       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0103 20:41:29.420595       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0103 20:41:29.420621       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0103 20:41:29.420715       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0103 20:41:29.583722       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 20:41:29.604790       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0103 20:41:29.617673       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0103 20:41:29.617729       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 20:41:29.617815       1 aggregator.go:165] initial CRD sync complete...
	I0103 20:41:29.617832       1 autoregister_controller.go:141] Starting autoregister controller
	I0103 20:41:29.617840       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0103 20:41:29.617847       1 cache.go:39] Caches are synced for autoregister controller
	I0103 20:41:29.618043       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 20:41:29.619823       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0103 20:41:29.619917       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0103 20:41:29.619924       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0103 20:41:29.621331       1 shared_informer.go:318] Caches are synced for configmaps
	I0103 20:41:30.423801       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0103 20:41:31.275221       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0103 20:41:31.284689       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0103 20:41:31.311371       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0103 20:41:31.333910       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 20:41:31.340817       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [ebc7b24efc66] <==
	W0103 20:41:23.830591       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830609       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830703       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830744       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830751       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830766       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830785       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830795       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830824       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830827       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830844       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830866       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830886       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830903       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830922       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830964       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830976       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.830964       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.831025       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.831051       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.831082       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.831101       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.831113       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.831257       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0103 20:41:23.885896       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [ace76cf58774] <==
	I0103 20:41:31.635589       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0103 20:41:31.635749       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0103 20:41:31.635771       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0103 20:41:31.648937       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0103 20:41:31.649092       1 disruption.go:433] "Sending events to api server."
	I0103 20:41:31.649394       1 disruption.go:444] "Starting disruption controller"
	I0103 20:41:31.649510       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0103 20:41:31.661218       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0103 20:41:31.661311       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0103 20:41:31.661341       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0103 20:41:31.662535       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0103 20:41:31.662589       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0103 20:41:31.662618       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0103 20:41:31.663487       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0103 20:41:31.663583       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0103 20:41:31.663615       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0103 20:41:31.664734       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0103 20:41:31.665062       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0103 20:41:31.665325       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0103 20:41:31.665201       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0103 20:41:31.679329       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0103 20:41:31.679466       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0103 20:41:31.679473       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0103 20:41:31.679478       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0103 20:41:31.691431       1 shared_informer.go:318] Caches are synced for tokens
	
	
	==> kube-controller-manager [f03da69a876f] <==
	I0103 20:41:19.484743       1 serving.go:380] Generated self-signed cert in-memory
	I0103 20:41:19.737456       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0103 20:41:19.737497       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:41:19.738538       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0103 20:41:19.738732       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0103 20:41:19.739115       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0103 20:41:19.739224       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-scheduler [c8e75aeb9a4c] <==
	I0103 20:41:19.723453       1 serving.go:380] Generated self-signed cert in-memory
	W0103 20:41:22.290566       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:41:22.290715       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:41:22.290745       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:41:22.290871       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:41:22.303043       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0103 20:41:22.303113       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:41:22.304612       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 20:41:22.304658       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:41:22.304952       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 20:41:22.305118       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 20:41:22.405342       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:41:23.824292       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0103 20:41:23.825353       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0103 20:41:23.825848       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0103 20:41:23.827073       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fe7d0a037231] <==
	I0103 20:41:28.253660       1 serving.go:380] Generated self-signed cert in-memory
	W0103 20:41:29.584474       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:41:29.584716       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:41:29.584934       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:41:29.585000       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:41:29.595821       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0103 20:41:29.595864       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:41:29.597404       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 20:41:29.597441       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:41:29.597743       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 20:41:29.597772       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 20:41:29.698275       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485109   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e400ea3fd3858f3a5df198b2fc7ade6-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-738000\" (UID: \"3e400ea3fd3858f3a5df198b2fc7ade6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485141   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3435326f0d47db6fe3123634e5a59871-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-738000\" (UID: \"3435326f0d47db6fe3123634e5a59871\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485172   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/6648727e0ea8f5448778022b97f6fe5c-etcd-data\") pod \"etcd-kubernetes-upgrade-738000\" (UID: \"6648727e0ea8f5448778022b97f6fe5c\") " pod="kube-system/etcd-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485202   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e400ea3fd3858f3a5df198b2fc7ade6-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-738000\" (UID: \"3e400ea3fd3858f3a5df198b2fc7ade6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485359   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e400ea3fd3858f3a5df198b2fc7ade6-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-738000\" (UID: \"3e400ea3fd3858f3a5df198b2fc7ade6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485388   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e400ea3fd3858f3a5df198b2fc7ade6-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-738000\" (UID: \"3e400ea3fd3858f3a5df198b2fc7ade6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485412   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3435326f0d47db6fe3123634e5a59871-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-738000\" (UID: \"3435326f0d47db6fe3123634e5a59871\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485464   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3435326f0d47db6fe3123634e5a59871-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-738000\" (UID: \"3435326f0d47db6fe3123634e5a59871\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485558   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/6648727e0ea8f5448778022b97f6fe5c-etcd-certs\") pod \"etcd-kubernetes-upgrade-738000\" (UID: \"6648727e0ea8f5448778022b97f6fe5c\") " pod="kube-system/etcd-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485587   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e400ea3fd3858f3a5df198b2fc7ade6-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-738000\" (UID: \"3e400ea3fd3858f3a5df198b2fc7ade6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485627   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3435326f0d47db6fe3123634e5a59871-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-738000\" (UID: \"3435326f0d47db6fe3123634e5a59871\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485667   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8f1b7f6b1d7f47fb06baceeb2fc6de5-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-738000\" (UID: \"e8f1b7f6b1d7f47fb06baceeb2fc6de5\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.485720   14927 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3435326f0d47db6fe3123634e5a59871-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-738000\" (UID: \"3435326f0d47db6fe3123634e5a59871\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: E0103 20:41:26.684196   14927 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-738000?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="800ms"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.713972   14927 scope.go:117] "RemoveContainer" containerID="ebc7b24efc66560079c1abe646a8d8b199a23063359ff572aebdc0709f5d2cef"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.721520   14927 scope.go:117] "RemoveContainer" containerID="f03da69a876ff8385b23eb169f397652e1f409f43accc25a3d7719950abb640e"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.729472   14927 scope.go:117] "RemoveContainer" containerID="c8e75aeb9a4c457d0b1d2c261d8f9c829e72be7e10f6d96d754a8b9ad26e14e2"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.736258   14927 scope.go:117] "RemoveContainer" containerID="b164426615623f75a6eaedbcfeaa56ae37e6db599db9bc7a178f862d17e10bdf"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:26.813597   14927 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-738000"
	Jan 03 20:41:26 kubernetes-upgrade-738000 kubelet[14927]: E0103 20:41:26.814123   14927 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-738000"
	Jan 03 20:41:27 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:27.624272   14927 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-738000"
	Jan 03 20:41:29 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:29.689768   14927 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-738000"
	Jan 03 20:41:29 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:29.689849   14927 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-738000"
	Jan 03 20:41:30 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:30.082761   14927 apiserver.go:52] "Watching apiserver"
	Jan 03 20:41:30 kubernetes-upgrade-738000 kubelet[14927]: I0103 20:41:30.182857   14927 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
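Note on the scheduler warnings in the log above: both kube-scheduler instances report that they cannot read configmap/extension-apiserver-authentication while the apiserver's RBAC caches are still syncing, and both continue without authentication configuration. The log's own suggested remedy, with its placeholders filled in for the identity that was actually denied (the binding name and the use of --user here are illustrative, not taken from this run), would be roughly:

	kubectl create rolebinding scheduler-authn-reader \
	  -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler

In this run the warning is transient startup noise: the scheduler's informer caches sync about a hundred milliseconds later (20:41:29.698 vs. the warning at 20:41:29.584).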
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-738000 -n kubernetes-upgrade-738000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-738000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-738000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-738000 describe pod storage-provisioner: exit status 1 (58.524808ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-738000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-738000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-738000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-738000: (2.867837177s)
--- FAIL: TestKubernetesUpgrade (570.82s)
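The failure window is visible in the kube-apiserver [ebc7b24efc66] block above: every gRPC channel to 127.0.0.1:2379 (etcd's client port) gets connection refused while the control plane is torn down for the upgrade, after which the kubelet removes that container. A quick etcd spot-check from inside the node would look like the sketch below; it assumes etcdctl v3 is available in the guest and that the certificates follow minikube's conventional kubeadm layout under /var/lib/minikube/certs, neither of which this log confirms:

	minikube ssh -p kubernetes-upgrade-738000 -- sudo ETCDCTL_API=3 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health

A healthy endpoint answers within the request timeout; connection refused at this stage simply confirms etcd had not yet come back up.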

                                                
                                    
TestMissingContainerUpgrade (43.66s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3476602596.exe start -p missing-upgrade-044000 --memory=2200 --driver=docker 
version_upgrade_test.go:322: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3476602596.exe start -p missing-upgrade-044000 --memory=2200 --driver=docker : exit status 70 (30.67620127s)

                                                
                                                
-- stdout --
	* [missing-upgrade-044000] minikube v1.9.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:31:40.721427921 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "missing-upgrade-044000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:31:54.253427792 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p missing-upgrade-044000", then "minikube start -p missing-upgrade-044000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB  (repeated download-progress updates elided)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:31:54.253427792 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
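The root cause each time is the generated docker.service shown in the diffs above. The comments minikube writes into that unit describe a standard systemd pattern: a unit or drop-in that inherits an ExecStart= from a base configuration must first clear it with an empty ExecStart=, because systemd only allows multiple ExecStart= lines for Type=oneshot services. The same mechanism as a standalone drop-in, with an illustrative path and deliberately minimal daemon flags, looks like:

	# /etc/systemd/system/docker.service.d/override.conf  (illustrative path)
	[Service]
	# clear the inherited command, then set the replacement
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

	# apply it:
	sudo systemctl daemon-reload
	sudo systemctl restart docker

In this run the override syntax itself is accepted; the restart fails because the docker.service control process exits with an error, which is what both attempts report.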
version_upgrade_test.go:322: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3476602596.exe start -p missing-upgrade-044000 --memory=2200 --driver=docker 
version_upgrade_test.go:322: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3476602596.exe start -p missing-upgrade-044000 --memory=2200 --driver=docker : exit status 70 (3.930963453s)

                                                
                                                
-- stdout --
	* [missing-upgrade-044000] minikube v1.9.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-044000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3476602596.exe start -p missing-upgrade-044000 --memory=2200 --driver=docker 
version_upgrade_test.go:322: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3476602596.exe start -p missing-upgrade-044000 --memory=2200 --driver=docker : exit status 70 (3.842509676s)

                                                
                                                
-- stdout --
	* [missing-upgrade-044000] minikube v1.9.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-044000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:328: release start failed: exit status 70
panic.go:523: *** TestMissingContainerUpgrade FAILED at 2024-01-03 12:32:05.9606 -0800 PST m=+2515.012217286
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-044000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-044000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ea593b820edc47b7d2a82479e5d42d0fae639f3c8b6860ddb012379a8ff3c2de",
	        "Created": "2024-01-03T20:31:48.885269591Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 198098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:31:49.082399284Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/ea593b820edc47b7d2a82479e5d42d0fae639f3c8b6860ddb012379a8ff3c2de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea593b820edc47b7d2a82479e5d42d0fae639f3c8b6860ddb012379a8ff3c2de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea593b820edc47b7d2a82479e5d42d0fae639f3c8b6860ddb012379a8ff3c2de/hosts",
	        "LogPath": "/var/lib/docker/containers/ea593b820edc47b7d2a82479e5d42d0fae639f3c8b6860ddb012379a8ff3c2de/ea593b820edc47b7d2a82479e5d42d0fae639f3c8b6860ddb012379a8ff3c2de-json.log",
	        "Name": "/missing-upgrade-044000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-044000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/001fc4be8879ef797adcfd24876116025c7ce970e13ebaa86fd0361f105a3709-init/diff:/var/lib/docker/overlay2/7c1058393de05d57474dbcbd37c6ec709fbf33413894282a82077ef58fc6e698/diff:/var/lib/docker/overlay2/7dda4c9c0be5066e2ddf4e1e6242f63e77b78758b4834e1f63a750b14fbcc3fc/diff:/var/lib/docker/overlay2/280e807d2b1fd4ae6a18a171facb2d484076c4a0c1689f7626b27fb9a1920edc/diff:/var/lib/docker/overlay2/b304d7dac9ccbce70c1e2360b60e3dbb8f440d239d2c6d19c36a98f0f3ef93f6/diff:/var/lib/docker/overlay2/fda9bc765860b25ccfb6eb7b8571899e1f2ffd356e1194757ddc92e08749350e/diff:/var/lib/docker/overlay2/739a478ee396ce7662c29a7a292c715f4e9950ca4df160b543591a5cd3f2f991/diff:/var/lib/docker/overlay2/3c0f90e74fe176f514fdc0f57219012eed5ecfc3447df8c8c62677d03d138137/diff:/var/lib/docker/overlay2/98da65a42bcba99ba48f3e017b0ae50f5090bb3d14eb6bc0e68a3ce86c428add/diff:/var/lib/docker/overlay2/9299b2e2763b5eed5cffcf0d8be6a4006b334c6330b526b1d079b29a740eeb32/diff:/var/lib/docker/overlay2/3ebe52
1db2799715ea3f9b8f112788be312c6ea9f635bdf480aa11b2004b547b/diff:/var/lib/docker/overlay2/9b7e180a63cf14cb532c3673d813b37898abe62dd2bad4e0e92110d8610ec0f8/diff:/var/lib/docker/overlay2/ddf6f44bbb344c1e6a8334c6c9455eb5dfc26b41c8c8e6b02b753d6d6fe94e9f/diff:/var/lib/docker/overlay2/aa1c1a3edc77ab2fbbf17591e24f5a8d150bb589c1d7fbff7c92c8bac9ec86be/diff:/var/lib/docker/overlay2/3d23b5bc6d406820c1ab948362dfaf5e78f123d20b83ec8f8188371597a551e5/diff:/var/lib/docker/overlay2/4ce0c817f78b2c368c8e1a4165d97a417c85e82c84f76c7aa26ab307e79a07e7/diff:/var/lib/docker/overlay2/4733545d684690c16562ec8430aaf0c9c11d6ca0182484521c8dcfe01a712469/diff:/var/lib/docker/overlay2/ae33f553fbffcf84515eb8f460e586c2fab605eb2e5fac70cf9dc4c0a5d2c5f5/diff:/var/lib/docker/overlay2/bd519fcfb45a1d5babe79a9d7de0c3e41afdceae533bf99fc6efbd7243735acb/diff:/var/lib/docker/overlay2/7dc00b67b14575632e30faf9b738ddbc8047d2d2b0f3193df96dac7ecaa9498c/diff:/var/lib/docker/overlay2/b36c418a5162f80076f606a888e61689e66c635505ce213c8f4fbebb37e75e46/diff:/var/lib/d
ocker/overlay2/a89c18d13f8d0ef6346597a5bc6f50c7cbf664d26750fda226c75dd348d533ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/001fc4be8879ef797adcfd24876116025c7ce970e13ebaa86fd0361f105a3709/merged",
	                "UpperDir": "/var/lib/docker/overlay2/001fc4be8879ef797adcfd24876116025c7ce970e13ebaa86fd0361f105a3709/diff",
	                "WorkDir": "/var/lib/docker/overlay2/001fc4be8879ef797adcfd24876116025c7ce970e13ebaa86fd0361f105a3709/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-044000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-044000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-044000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-044000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-044000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93565d65497900b0af39b2e61c5806f5b6b7cbd20e12fc91104a2257a8e88e0e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59743"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59741"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59742"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/93565d654979",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "666d5a91e02a5782dbbc2196fb71ac6f5fc32835508b7120e03f4c98d9c3625a",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "NetworkID": "f6d087d90d9a60fec835f9c6365cd7e1819a4856f9a6569205ba33cdfc735896",
	                    "EndpointID": "666d5a91e02a5782dbbc2196fb71ac6f5fc32835508b7120e03f4c98d9c3625a",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-044000 -n missing-upgrade-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-044000 -n missing-upgrade-044000: exit status 6 (368.255293ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 12:32:06.371359   20810 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-044000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-044000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-044000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-044000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-044000: (2.173435561s)
--- FAIL: TestMissingContainerUpgrade (43.66s)
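One detail from the status check above: the exit status 6 comes from kubeconfig endpoint extraction, since "missing-upgrade-044000" never made it into the kubeconfig, and the tool's own hint is `minikube update-context`. For a profile that was actually healthy, that would be (the -p profile flag is the same one used throughout this report):

	minikube update-context -p missing-upgrade-044000

Here the harness deletes the profile immediately afterwards, so the stale context is moot.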

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (41.37s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.230338381.exe start -p stopped-upgrade-442000 --memory=2200 --vm-driver=docker 
E0103 12:33:37.615502   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.230338381.exe start -p stopped-upgrade-442000 --memory=2200 --vm-driver=docker : exit status 70 (31.150379969s)

-- stdout --
	* [stopped-upgrade-442000] minikube v1.9.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig3789089059
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:33:33.776445987 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-442000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:33:47.084445860 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-442000", then "minikube start -p stopped-upgrade-442000 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:33:47.084445860 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
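The diff embedded in the failure is the provisioner rewriting /lib/systemd/system/docker.service: it blanks the inherited ExecStart before setting its own, exactly as the generated comments explain, because systemd rejects a second ExecStart= on a Type=notify service. The same reset pattern in a minimal stand-alone drop-in (a sketch for illustration, not the unit minikube writes; the override path is the stock systemd convention):

    # /etc/systemd/system/docker.service.d/override.conf
    [Service]
    # An empty assignment clears the ExecStart inherited from the base unit;
    # without it, systemd refuses to start the service with the
    # "more than one ExecStart=" error quoted in the diff above.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

Applying it mirrors the provisioning command in the log: sudo systemctl daemon-reload && sudo systemctl restart docker.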
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.230338381.exe start -p stopped-upgrade-442000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.230338381.exe start -p stopped-upgrade-442000 --memory=2200 --vm-driver=docker : exit status 70 (4.049706279s)

-- stdout --
	* [stopped-upgrade-442000] minikube v1.9.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig244263503
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-442000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.230338381.exe start -p stopped-upgrade-442000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.230338381.exe start -p stopped-upgrade-442000 --memory=2200 --vm-driver=docker : exit status 70 (3.934099576s)

-- stdout --
	* [stopped-upgrade-442000] minikube v1.9.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig711325620
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-442000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:202: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (41.37s)
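All three attempts die at the same point: dockerd inside the kic container will not start under the rewritten unit, and the legacy v1.9.0 binary gives up with exit status 70. The log's own advice ("systemctl status docker.service" and "journalctl -xe") is the right next step; since the "node" here is a docker container, that means running those commands through docker exec (container name taken from this run, assuming it was still up when inspected):

    # Pull the docker.service failure details out of the kic container.
    docker exec stopped-upgrade-442000 systemctl status docker.service --no-pager
    docker exec stopped-upgrade-442000 journalctl -u docker.service --no-pager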

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (254.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-079000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0103 12:44:53.196300   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:44:58.421287   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:45:03.436886   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-079000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m14.030898664s)

-- stdout --
	* [old-k8s-version-079000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-079000 in cluster old-k8s-version-079000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0103 12:44:51.210715   27031 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:44:51.211008   27031 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:44:51.211015   27031 out.go:309] Setting ErrFile to fd 2...
	I0103 12:44:51.211019   27031 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:44:51.211209   27031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:44:51.212765   27031 out.go:303] Setting JSON to false
	I0103 12:44:51.236281   27031 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8061,"bootTime":1704306630,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 12:44:51.236381   27031 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 12:44:51.262714   27031 out.go:177] * [old-k8s-version-079000] minikube v1.32.0 on Darwin 14.2
	I0103 12:44:51.335634   27031 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 12:44:51.303424   27031 notify.go:220] Checking for updates...
	I0103 12:44:51.379408   27031 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 12:44:51.422339   27031 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 12:44:51.464486   27031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 12:44:51.507429   27031 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	I0103 12:44:51.550412   27031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 12:44:51.573071   27031 config.go:182] Loaded profile config "false-236000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0103 12:44:51.573213   27031 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 12:44:51.637903   27031 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 12:44:51.638062   27031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:44:51.770534   27031 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 20:44:51.755873052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:44:51.794209   27031 out.go:177] * Using the docker driver based on user configuration
	I0103 12:44:51.853354   27031 start.go:298] selected driver: docker
	I0103 12:44:51.853378   27031 start.go:902] validating driver "docker" against <nil>
	I0103 12:44:51.853396   27031 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 12:44:51.857572   27031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:44:51.979958   27031 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 20:44:51.968997522 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:44:51.980173   27031 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 12:44:51.980369   27031 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 12:44:52.022381   27031 out.go:177] * Using Docker Desktop driver with root privileges
	I0103 12:44:52.060775   27031 cni.go:84] Creating CNI manager for ""
	I0103 12:44:52.060820   27031 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0103 12:44:52.060841   27031 start_flags.go:323] config:
	{Name:old-k8s-version-079000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:44:52.083344   27031 out.go:177] * Starting control plane node old-k8s-version-079000 in cluster old-k8s-version-079000
	I0103 12:44:52.115237   27031 cache.go:121] Beginning downloading kic base image for docker with docker
	I0103 12:44:52.152342   27031 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 12:44:52.210628   27031 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0103 12:44:52.210723   27031 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 12:44:52.210726   27031 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0103 12:44:52.210767   27031 cache.go:56] Caching tarball of preloaded images
	I0103 12:44:52.210966   27031 preload.go:174] Found /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0103 12:44:52.210989   27031 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0103 12:44:52.211189   27031 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/config.json ...
	I0103 12:44:52.212017   27031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/config.json: {Name:mk56023a73a3d270c799e06d56151a29d2835afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:44:52.289648   27031 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 12:44:52.289669   27031 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 12:44:52.289693   27031 cache.go:194] Successfully downloaded all kic artifacts
	I0103 12:44:52.289740   27031 start.go:365] acquiring machines lock for old-k8s-version-079000: {Name:mkefdae168ae5396c7edce5050a591938b306f62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 12:44:52.289910   27031 start.go:369] acquired machines lock for "old-k8s-version-079000" in 152.126µs
	I0103 12:44:52.289938   27031 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-079000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0103 12:44:52.289993   27031 start.go:125] createHost starting for "" (driver="docker")
	I0103 12:44:52.332299   27031 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0103 12:44:52.332685   27031 start.go:159] libmachine.API.Create for "old-k8s-version-079000" (driver="docker")
	I0103 12:44:52.332739   27031 client.go:168] LocalClient.Create starting
	I0103 12:44:52.332982   27031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem
	I0103 12:44:52.333068   27031 main.go:141] libmachine: Decoding PEM data...
	I0103 12:44:52.333102   27031 main.go:141] libmachine: Parsing certificate...
	I0103 12:44:52.333181   27031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem
	I0103 12:44:52.333249   27031 main.go:141] libmachine: Decoding PEM data...
	I0103 12:44:52.333265   27031 main.go:141] libmachine: Parsing certificate...
	I0103 12:44:52.334095   27031 cli_runner.go:164] Run: docker network inspect old-k8s-version-079000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 12:44:52.395751   27031 cli_runner.go:211] docker network inspect old-k8s-version-079000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 12:44:52.395850   27031 network_create.go:281] running [docker network inspect old-k8s-version-079000] to gather additional debugging logs...
	I0103 12:44:52.395867   27031 cli_runner.go:164] Run: docker network inspect old-k8s-version-079000
	W0103 12:44:52.447055   27031 cli_runner.go:211] docker network inspect old-k8s-version-079000 returned with exit code 1
	I0103 12:44:52.447086   27031 network_create.go:284] error running [docker network inspect old-k8s-version-079000]: docker network inspect old-k8s-version-079000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-079000 not found
	I0103 12:44:52.447097   27031 network_create.go:286] output of [docker network inspect old-k8s-version-079000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-079000 not found
	
	** /stderr **
	I0103 12:44:52.447287   27031 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 12:44:52.505717   27031 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0103 12:44:52.507111   27031 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0103 12:44:52.508569   27031 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0103 12:44:52.508888   27031 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00226d730}
	I0103 12:44:52.508903   27031 network_create.go:124] attempt to create docker network old-k8s-version-079000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0103 12:44:52.508968   27031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-079000 old-k8s-version-079000
	I0103 12:44:52.608332   27031 network_create.go:108] docker network old-k8s-version-079000 192.168.76.0/24 created
	I0103 12:44:52.608373   27031 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-079000" container
	I0103 12:44:52.608491   27031 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 12:44:52.660391   27031 cli_runner.go:164] Run: docker volume create old-k8s-version-079000 --label name.minikube.sigs.k8s.io=old-k8s-version-079000 --label created_by.minikube.sigs.k8s.io=true
	I0103 12:44:52.717794   27031 oci.go:103] Successfully created a docker volume old-k8s-version-079000
	I0103 12:44:52.717898   27031 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-079000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-079000 --entrypoint /usr/bin/test -v old-k8s-version-079000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 12:44:53.159395   27031 oci.go:107] Successfully prepared a docker volume old-k8s-version-079000
	I0103 12:44:53.159440   27031 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0103 12:44:53.159455   27031 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 12:44:53.159566   27031 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-079000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 12:44:55.669710   27031 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-079000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (2.51009597s)
	I0103 12:44:55.669743   27031 kic.go:203] duration metric: took 2.510319 seconds to extract preloaded images to volume
	I0103 12:44:55.669895   27031 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 12:44:55.780738   27031 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-079000 --name old-k8s-version-079000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-079000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-079000 --network old-k8s-version-079000 --ip 192.168.76.2 --volume old-k8s-version-079000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 12:44:56.099441   27031 cli_runner.go:164] Run: docker container inspect old-k8s-version-079000 --format={{.State.Running}}
	I0103 12:44:56.167899   27031 cli_runner.go:164] Run: docker container inspect old-k8s-version-079000 --format={{.State.Status}}
	I0103 12:44:56.257653   27031 cli_runner.go:164] Run: docker exec old-k8s-version-079000 stat /var/lib/dpkg/alternatives/iptables
	I0103 12:44:56.415030   27031 oci.go:144] the created container "old-k8s-version-079000" has a running status.
	I0103 12:44:56.415073   27031 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa...
	I0103 12:44:56.582439   27031 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 12:44:56.652915   27031 cli_runner.go:164] Run: docker container inspect old-k8s-version-079000 --format={{.State.Status}}
	I0103 12:44:56.708628   27031 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 12:44:56.708650   27031 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-079000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 12:44:56.796346   27031 cli_runner.go:164] Run: docker container inspect old-k8s-version-079000 --format={{.State.Status}}
	I0103 12:44:56.847446   27031 machine.go:88] provisioning docker machine ...
	I0103 12:44:56.847498   27031 ubuntu.go:169] provisioning hostname "old-k8s-version-079000"
	I0103 12:44:56.847596   27031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:44:56.902389   27031 main.go:141] libmachine: Using SSH client type: native
	I0103 12:44:56.902726   27031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61413 <nil> <nil>}
	I0103 12:44:56.902738   27031 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-079000 && echo "old-k8s-version-079000" | sudo tee /etc/hostname
	I0103 12:44:57.033616   27031 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-079000
	
	I0103 12:44:57.033721   27031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:44:57.086739   27031 main.go:141] libmachine: Using SSH client type: native
	I0103 12:44:57.087041   27031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61413 <nil> <nil>}
	I0103 12:44:57.087055   27031 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-079000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-079000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-079000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 12:44:57.206904   27031 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 12:44:57.206928   27031 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17885-10646/.minikube CaCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17885-10646/.minikube}
	I0103 12:44:57.206946   27031 ubuntu.go:177] setting up certificates
	I0103 12:44:57.206958   27031 provision.go:83] configureAuth start
	I0103 12:44:57.207034   27031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-079000
	I0103 12:44:57.258878   27031 provision.go:138] copyHostCerts
	I0103 12:44:57.258983   27031 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem, removing ...
	I0103 12:44:57.258994   27031 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem
	I0103 12:44:57.259696   27031 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem (1679 bytes)
	I0103 12:44:57.259928   27031 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem, removing ...
	I0103 12:44:57.259934   27031 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem
	I0103 12:44:57.260029   27031 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem (1078 bytes)
	I0103 12:44:57.298053   27031 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem, removing ...
	I0103 12:44:57.298067   27031 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem
	I0103 12:44:57.298199   27031 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem (1123 bytes)
	I0103 12:44:57.298426   27031 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-079000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-079000]
	I0103 12:44:57.358778   27031 provision.go:172] copyRemoteCerts
	I0103 12:44:57.358842   27031 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 12:44:57.358898   27031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:44:57.410549   27031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61413 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa Username:docker}
	I0103 12:44:57.496355   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 12:44:57.516611   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 12:44:57.537839   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 12:44:57.558787   27031 provision.go:86] duration metric: configureAuth took 351.817535ms
	I0103 12:44:57.558807   27031 ubuntu.go:193] setting minikube options for container-runtime
	I0103 12:44:57.558948   27031 config.go:182] Loaded profile config "old-k8s-version-079000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0103 12:44:57.559030   27031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:44:57.611666   27031 main.go:141] libmachine: Using SSH client type: native
	I0103 12:44:57.611965   27031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61413 <nil> <nil>}
	I0103 12:44:57.611978   27031 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0103 12:44:57.731560   27031 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0103 12:44:57.731576   27031 ubuntu.go:71] root file system type: overlay
	I0103 12:44:57.731658   27031 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0103 12:44:57.731737   27031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:44:57.789104   27031 main.go:141] libmachine: Using SSH client type: native
	I0103 12:44:57.789394   27031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61413 <nil> <nil>}
	I0103 12:44:57.789449   27031 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0103 12:44:57.921269   27031 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0103 12:44:57.921377   27031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:44:57.983252   27031 main.go:141] libmachine: Using SSH client type: native
	I0103 12:44:57.983708   27031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61413 <nil> <nil>}
	I0103 12:44:57.983740   27031 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0103 12:44:58.669585   27031 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-03 20:44:57.918598757 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0103 12:44:58.669608   27031 machine.go:91] provisioned docker machine in 1.82215482s
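
The diff-or-replace one-liner above is an idempotent update: install docker.service.new and restart the daemon only when the rendered unit differs from what is already on disk (here it did differ, hence the diff output and restart). A rough Go equivalent of that compare-then-swap pattern, an illustrative sketch that must run as root; the log's version runs shell commands over SSH instead:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Missing current file reads as empty, which also counts as "changed".
	cur, _ := os.ReadFile("/lib/systemd/system/docker.service")
	next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(cur, next) {
		fmt.Println("unit unchanged, nothing to do")
		return
	}
	if err := os.Rename("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		cmd := exec.Command("systemctl", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("systemctl %v: %v: %s", args, err, out))
		}
	}
}
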
	I0103 12:44:58.669623   27031 client.go:171] LocalClient.Create took 6.336958006s
	I0103 12:44:58.669639   27031 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-079000" took 6.337037993s
	I0103 12:44:58.669647   27031 start.go:300] post-start starting for "old-k8s-version-079000" (driver="docker")
	I0103 12:44:58.669659   27031 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 12:44:58.669732   27031 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 12:44:58.669800   27031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:44:58.722674   27031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61413 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa Username:docker}
	I0103 12:44:58.809487   27031 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 12:44:58.813382   27031 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 12:44:58.813409   27031 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 12:44:58.813417   27031 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 12:44:58.813423   27031 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 12:44:58.813434   27031 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/addons for local assets ...
	I0103 12:44:58.813540   27031 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/files for local assets ...
	I0103 12:44:58.813724   27031 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> 110902.pem in /etc/ssl/certs
	I0103 12:44:58.813926   27031 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 12:44:58.822139   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:44:58.842884   27031 start.go:303] post-start completed in 173.229969ms
	I0103 12:44:58.843475   27031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-079000
	I0103 12:44:58.896088   27031 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/config.json ...
	I0103 12:44:58.896543   27031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 12:44:58.896603   27031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:44:58.951930   27031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61413 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa Username:docker}
	I0103 12:44:59.035364   27031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 12:44:59.040560   27031 start.go:128] duration metric: createHost completed in 6.750636066s
	I0103 12:44:59.040581   27031 start.go:83] releasing machines lock for "old-k8s-version-079000", held for 6.750748089s
	I0103 12:44:59.040671   27031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-079000
	I0103 12:44:59.102025   27031 ssh_runner.go:195] Run: cat /version.json
	I0103 12:44:59.102050   27031 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 12:44:59.102102   27031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:44:59.102116   27031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:44:59.157595   27031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61413 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa Username:docker}
	I0103 12:44:59.157790   27031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61413 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa Username:docker}
	I0103 12:44:59.351982   27031 ssh_runner.go:195] Run: systemctl --version
	I0103 12:44:59.356875   27031 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 12:44:59.362208   27031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0103 12:44:59.384999   27031 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0103 12:44:59.385112   27031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0103 12:44:59.400995   27031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0103 12:44:59.417063   27031 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
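
The find/sed invocations above patch the loopback, bridge, and podman CNI configs in place, forcing the pod subnet to 10.244.0.0/16. A structured sketch of the same rewrite using encoding/json; the sample config and in-memory handling here are assumptions for illustration, the real patch is sed over files in /etc/cni/net.d:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A plausible bridge config as it might exist before patching.
	conf := []byte(`{"cniVersion":"0.4.0","name":"crio-bridge","type":"bridge","ipam":{"type":"host-local","subnet":"10.85.0.0/16"}}`)
	var c map[string]any
	if err := json.Unmarshal(conf, &c); err != nil {
		panic(err)
	}
	if ipam, ok := c["ipam"].(map[string]any); ok {
		ipam["subnet"] = "10.244.0.0/16" // minikube's pod CIDR, per the log
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
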
	I0103 12:44:59.417080   27031 start.go:475] detecting cgroup driver to use...
	I0103 12:44:59.417102   27031 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:44:59.417230   27031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:44:59.433086   27031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0103 12:44:59.443203   27031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0103 12:44:59.453900   27031 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0103 12:44:59.453970   27031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0103 12:44:59.464608   27031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:44:59.474425   27031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0103 12:44:59.484813   27031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:44:59.494479   27031 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 12:44:59.503553   27031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0103 12:44:59.513189   27031 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 12:44:59.521523   27031 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 12:44:59.530390   27031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:44:59.578998   27031 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0103 12:44:59.656612   27031 start.go:475] detecting cgroup driver to use...
	I0103 12:44:59.656631   27031 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:44:59.656697   27031 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0103 12:44:59.675097   27031 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0103 12:44:59.675227   27031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0103 12:44:59.686857   27031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:44:59.703476   27031 ssh_runner.go:195] Run: which cri-dockerd
	I0103 12:44:59.708371   27031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0103 12:44:59.717518   27031 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0103 12:44:59.735531   27031 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0103 12:44:59.822020   27031 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0103 12:44:59.899083   27031 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0103 12:44:59.899183   27031 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0103 12:44:59.916245   27031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:45:00.005143   27031 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 12:45:00.246955   27031 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 12:45:00.271083   27031 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 12:45:00.356051   27031 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0103 12:45:00.356335   27031 cli_runner.go:164] Run: docker exec -t old-k8s-version-079000 dig +short host.docker.internal
	I0103 12:45:00.470868   27031 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0103 12:45:00.470955   27031 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0103 12:45:00.475831   27031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
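
The /etc/hosts edit above follows a small idempotent idiom: filter out any existing line ending in the tab-separated hostname, then append the current mapping, so repeated runs never accumulate duplicates. A sketch of that idiom in Go (hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<name>" and appends ip\tname,
// mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost\n",
		"192.168.65.254", "host.minikube.internal"))
}
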
	I0103 12:45:00.486892   27031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:45:00.539385   27031 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0103 12:45:00.539475   27031 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:45:00.559279   27031 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0103 12:45:00.559296   27031 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0103 12:45:00.559366   27031 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0103 12:45:00.567848   27031 ssh_runner.go:195] Run: which lz4
	I0103 12:45:00.572211   27031 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 12:45:00.576691   27031 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 12:45:00.576723   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0103 12:45:05.792466   27031 docker.go:635] Took 5.220369 seconds to copy over tarball
	I0103 12:45:05.792549   27031 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 12:45:07.369497   27031 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.576952841s)
	I0103 12:45:07.369511   27031 ssh_runner.go:146] rm: /preloaded.tar.lz4
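
The preload step copies a roughly 370 MB lz4-compressed image tarball over SSH and unpacks it into /var, seeding /var/lib/docker without pulling from a registry, then deletes the tarball. A rough equivalent of the extract step, assuming tar and lz4 on PATH and root privileges, as on the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same invocation as the log: tar decompresses through lz4 into /var.
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
		panic(err)
	}
	fmt.Printf("extracted and removed preload in %s\n", time.Since(start))
}
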
	I0103 12:45:07.408767   27031 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0103 12:45:07.417975   27031 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0103 12:45:07.433868   27031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:45:07.490282   27031 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 12:45:08.173611   27031 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:45:08.193391   27031 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0103 12:45:08.193406   27031 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0103 12:45:08.193417   27031 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 12:45:08.199771   27031 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0103 12:45:08.201209   27031 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:45:08.202687   27031 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0103 12:45:08.202732   27031 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0103 12:45:08.203037   27031 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:45:08.203747   27031 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:45:08.204999   27031 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:45:08.205711   27031 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:45:08.207026   27031 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:45:08.208927   27031 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0103 12:45:08.208939   27031 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0103 12:45:08.209325   27031 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0103 12:45:08.210717   27031 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:45:08.210789   27031 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:45:08.211430   27031 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:45:08.211652   27031 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:45:08.634613   27031 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:45:08.640509   27031 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0103 12:45:08.651550   27031 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0103 12:45:08.658137   27031 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0103 12:45:08.658180   27031 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:45:08.658245   27031 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:45:08.665482   27031 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0103 12:45:08.665516   27031 docker.go:323] Removing image: registry.k8s.io/pause:3.1
	I0103 12:45:08.665572   27031 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0103 12:45:08.678331   27031 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0103 12:45:08.678378   27031 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.2
	I0103 12:45:08.678459   27031 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0103 12:45:08.685721   27031 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0103 12:45:08.689788   27031 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0103 12:45:08.693304   27031 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0103 12:45:08.705124   27031 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0103 12:45:08.713545   27031 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0103 12:45:08.713576   27031 docker.go:323] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0103 12:45:08.713642   27031 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0103 12:45:08.715260   27031 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:45:08.736444   27031 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0103 12:45:08.737428   27031 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0103 12:45:08.737451   27031 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:45:08.737508   27031 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:45:08.753860   27031 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0103 12:45:08.799575   27031 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:45:08.819941   27031 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0103 12:45:08.819974   27031 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:45:08.820045   27031 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:45:08.838418   27031 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0103 12:45:08.914538   27031 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:45:08.935245   27031 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0103 12:45:08.935270   27031 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:45:08.935357   27031 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:45:08.957272   27031 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0103 12:45:09.194744   27031 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:45:09.214335   27031 cache_images.go:92] LoadImages completed in 1.020914653s
	W0103 12:45:09.214395   27031 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
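
Each "needs transfer" decision above compares the image ID reported by the runtime against the hash expected for that tag; a mismatch, or a missing image, means the cached tarball should be loaded (which then fails here because the cache files do not exist). A sketch of that check as a hypothetical helper, not minikube's cache_images.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime's ID for image differs from
// wantID. An inspect error is treated as "image not present".
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	// Tag and hash taken from the log lines above.
	fmt.Println("needs transfer:", needsTransfer("registry.k8s.io/pause:3.1",
		"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"))
}
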
	I0103 12:45:09.214481   27031 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0103 12:45:09.264328   27031 cni.go:84] Creating CNI manager for ""
	I0103 12:45:09.264346   27031 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0103 12:45:09.264359   27031 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 12:45:09.264376   27031 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-079000 NodeName:old-k8s-version-079000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 12:45:09.264479   27031 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-079000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-079000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
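
The manifest above is rendered from the kubeadm options struct logged earlier and is later written to /var/tmp/minikube/kubeadm.yaml.new. A trimmed sketch of rendering such a config with text/template; the template text here is abbreviated and illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

type params struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
	K8sVersion       string
}

// Abbreviated stand-in for the full manifest shown in the log.
var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`))

func main() {
	kubeadmTmpl.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.76.2",
		NodeName:         "old-k8s-version-079000",
		PodSubnet:        "10.244.0.0/16",
		K8sVersion:       "v1.16.0",
	})
}
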
	
	I0103 12:45:09.264529   27031 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-079000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 12:45:09.264589   27031 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0103 12:45:09.273272   27031 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 12:45:09.273333   27031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 12:45:09.281649   27031 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0103 12:45:09.297176   27031 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 12:45:09.312525   27031 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0103 12:45:09.328332   27031 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0103 12:45:09.332604   27031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 12:45:09.343152   27031 certs.go:56] Setting up /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000 for IP: 192.168.76.2
	I0103 12:45:09.343173   27031 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a30c05f18415c794a1ae2617714fd3a6ba516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:45:09.343364   27031 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key
	I0103 12:45:09.343444   27031 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key
	I0103 12:45:09.343489   27031 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/client.key
	I0103 12:45:09.343500   27031 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/client.crt with IP's: []
	I0103 12:45:09.583325   27031 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/client.crt ...
	I0103 12:45:09.583343   27031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/client.crt: {Name:mkee1890013c8c4e294fc0f434bcfa1f989bc2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:45:09.583694   27031 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/client.key ...
	I0103 12:45:09.583703   27031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/client.key: {Name:mkf9921ed5e670786ee16e432459c14235b8e3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:45:09.583941   27031 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.key.31bdca25
	I0103 12:45:09.583957   27031 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 12:45:09.875191   27031 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.crt.31bdca25 ...
	I0103 12:45:09.875212   27031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.crt.31bdca25: {Name:mk5e1479834f0f5f73263414d178142e4b1af2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:45:09.875505   27031 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.key.31bdca25 ...
	I0103 12:45:09.875515   27031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.key.31bdca25: {Name:mk376cc06d9fa69620816b1e663255737bf8d9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:45:09.875728   27031 certs.go:337] copying /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.crt
	I0103 12:45:09.875920   27031 certs.go:341] copying /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.key
	I0103 12:45:09.876091   27031 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/proxy-client.key
	I0103 12:45:09.876105   27031 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/proxy-client.crt with IP's: []
	I0103 12:45:09.963129   27031 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/proxy-client.crt ...
	I0103 12:45:09.963144   27031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/proxy-client.crt: {Name:mk85e3216a0acaec8eaad264f9849327530b6f8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:45:09.963468   27031 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/proxy-client.key ...
	I0103 12:45:09.963477   27031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/proxy-client.key: {Name:mke0886fb1fa6ecbe4b50b660e837c18f5841a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:45:09.963921   27031 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem (1338 bytes)
	W0103 12:45:09.964005   27031 certs.go:433] ignoring /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090_empty.pem, impossibly tiny 0 bytes
	I0103 12:45:09.964019   27031 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 12:45:09.964050   27031 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem (1078 bytes)
	I0103 12:45:09.964080   27031 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem (1123 bytes)
	I0103 12:45:09.964110   27031 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem (1679 bytes)
	I0103 12:45:09.964178   27031 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:45:09.964681   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 12:45:09.985868   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 12:45:10.006828   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 12:45:10.027567   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 12:45:10.048305   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 12:45:10.069978   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 12:45:10.091588   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 12:45:10.112270   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 12:45:10.133219   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem --> /usr/share/ca-certificates/11090.pem (1338 bytes)
	I0103 12:45:10.154158   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /usr/share/ca-certificates/110902.pem (1708 bytes)
	I0103 12:45:10.175256   27031 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 12:45:10.196415   27031 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 12:45:10.212160   27031 ssh_runner.go:195] Run: openssl version
	I0103 12:45:10.217858   27031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110902.pem && ln -fs /usr/share/ca-certificates/110902.pem /etc/ssl/certs/110902.pem"
	I0103 12:45:10.227319   27031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110902.pem
	I0103 12:45:10.231575   27031 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:57 /usr/share/ca-certificates/110902.pem
	I0103 12:45:10.231626   27031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110902.pem
	I0103 12:45:10.238375   27031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110902.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 12:45:10.247586   27031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 12:45:10.256488   27031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:45:10.260775   27031 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:45:10.260822   27031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:45:10.267428   27031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 12:45:10.276458   27031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11090.pem && ln -fs /usr/share/ca-certificates/11090.pem /etc/ssl/certs/11090.pem"
	I0103 12:45:10.285641   27031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11090.pem
	I0103 12:45:10.289699   27031 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:57 /usr/share/ca-certificates/11090.pem
	I0103 12:45:10.289744   27031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11090.pem
	I0103 12:45:10.296558   27031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11090.pem /etc/ssl/certs/51391683.0"
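
The <hash>.0 symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) are how OpenSSL locates CA certificates: it hashes the certificate's subject name and looks for a file with that name in /etc/ssl/certs. A sketch of creating one such link (needs root; the paths are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replace any stale link; ignore "not exists"
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}
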
	I0103 12:45:10.305710   27031 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 12:45:10.310010   27031 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 12:45:10.310058   27031 kubeadm.go:404] StartCluster: {Name:old-k8s-version-079000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:45:10.310151   27031 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 12:45:10.327975   27031 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 12:45:10.336645   27031 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 12:45:10.345068   27031 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 12:45:10.345121   27031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:45:10.353372   27031 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 12:45:10.353405   27031 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 12:45:10.403692   27031 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0103 12:45:10.403764   27031 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 12:45:10.658112   27031 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 12:45:10.658205   27031 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 12:45:10.658283   27031 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 12:45:10.866918   27031 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 12:45:10.867640   27031 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 12:45:10.873795   27031 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0103 12:45:10.942283   27031 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 12:45:10.985386   27031 out.go:204]   - Generating certificates and keys ...
	I0103 12:45:10.985519   27031 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 12:45:10.985651   27031 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 12:45:11.243563   27031 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 12:45:11.360551   27031 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 12:45:11.414108   27031 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 12:45:11.537057   27031 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 12:45:11.636525   27031 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 12:45:11.636652   27031 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-079000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0103 12:45:11.759019   27031 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 12:45:11.759246   27031 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-079000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0103 12:45:11.886502   27031 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 12:45:12.068550   27031 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 12:45:12.241042   27031 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 12:45:12.241109   27031 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 12:45:12.314947   27031 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 12:45:12.586345   27031 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 12:45:12.702648   27031 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 12:45:12.818204   27031 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 12:45:12.818834   27031 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 12:45:12.842372   27031 out.go:204]   - Booting up control plane ...
	I0103 12:45:12.842550   27031 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 12:45:12.842662   27031 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 12:45:12.842767   27031 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 12:45:12.842907   27031 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 12:45:12.843186   27031 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 12:45:52.827039   27031 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0103 12:45:52.827706   27031 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:45:52.827870   27031 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:45:57.828312   27031 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:45:57.828466   27031 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:46:07.829236   27031 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:46:07.829404   27031 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:46:27.829759   27031 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:46:27.829958   27031 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:47:07.830421   27031 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:47:07.830768   27031 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:47:07.830933   27031 kubeadm.go:322] 
	I0103 12:47:07.830984   27031 kubeadm.go:322] Unfortunately, an error has occurred:
	I0103 12:47:07.831030   27031 kubeadm.go:322] 	timed out waiting for the condition
	I0103 12:47:07.831036   27031 kubeadm.go:322] 
	I0103 12:47:07.831076   27031 kubeadm.go:322] This error is likely caused by:
	I0103 12:47:07.831122   27031 kubeadm.go:322] 	- The kubelet is not running
	I0103 12:47:07.831241   27031 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0103 12:47:07.831251   27031 kubeadm.go:322] 
	I0103 12:47:07.831345   27031 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0103 12:47:07.831386   27031 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0103 12:47:07.831427   27031 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0103 12:47:07.831435   27031 kubeadm.go:322] 
	I0103 12:47:07.831547   27031 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0103 12:47:07.831617   27031 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0103 12:47:07.831709   27031 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0103 12:47:07.831750   27031 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0103 12:47:07.831822   27031 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0103 12:47:07.831856   27031 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0103 12:47:07.833122   27031 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0103 12:47:07.833194   27031 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0103 12:47:07.833335   27031 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0103 12:47:07.833485   27031 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 12:47:07.833626   27031 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0103 12:47:07.833708   27031 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
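kubeadm's hint above is the right starting point for this class of failure. A hedged sketch of that diagnostic sequence, run inside the node container that the Docker driver creates (the container name old-k8s-version-079000 is inferred from the certificate lines in this log, and entering it via docker exec is an assumption; adjust for other profiles):

	# Enter the node container (assumption: the Docker driver names it after the profile).
	docker exec -it old-k8s-version-079000 bash

	# Is the kubelet running, and if not, why? (commands verbatim from the kubeadm hint)
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50

	# Did any control-plane container start and then crash?
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID    # substitute a real ID from the previous command

In this run the telling symptom is that the kubelet's own health endpoint on 127.0.0.1:10248 never answers, so the journalctl output is where the root cause would show up.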
	W0103 12:47:07.833803   27031 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-079000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-079000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0103 12:47:07.833832   27031 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0103 12:47:08.245098   27031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 12:47:08.256129   27031 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 12:47:08.256182   27031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:47:08.265049   27031 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
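The retry below reproduces the same four preflight WARNINGs, since nothing about the node changes between attempts. For an interactive reproduction (not something minikube does here; it suppresses these checks via --ignore-preflight-errors), the warnings map onto roughly these remediations, sketched under the assumption of a systemd host with Docker as the runtime:

	# [WARNING IsDockerSystemdCheck] - switch Docker to the systemd cgroup driver.
	# Note: this overwrites any existing /etc/docker/daemon.json.
	printf '{ "exec-opts": ["native.cgroupdriver=systemd"] }\n' | sudo tee /etc/docker/daemon.json
	sudo systemctl restart docker

	# [WARNING Swap] - kubelets of this era refuse to run with swap enabled.
	sudo swapoff -a

	# [WARNING Service-Kubelet] - enable the unit so it starts on boot.
	sudo systemctl enable kubelet.service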
	I0103 12:47:08.265070   27031 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 12:47:08.316936   27031 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0103 12:47:08.316983   27031 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 12:47:08.552212   27031 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 12:47:08.552304   27031 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 12:47:08.552424   27031 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0103 12:47:08.726738   27031 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 12:47:08.727517   27031 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 12:47:08.733454   27031 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0103 12:47:08.801136   27031 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 12:47:08.821230   27031 out.go:204]   - Generating certificates and keys ...
	I0103 12:47:08.821293   27031 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 12:47:08.821366   27031 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 12:47:08.821436   27031 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0103 12:47:08.821489   27031 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0103 12:47:08.821557   27031 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0103 12:47:08.821605   27031 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0103 12:47:08.821657   27031 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0103 12:47:08.821698   27031 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0103 12:47:08.821771   27031 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0103 12:47:08.821837   27031 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0103 12:47:08.821874   27031 kubeadm.go:322] [certs] Using the existing "sa" key
	I0103 12:47:08.821921   27031 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 12:47:08.957021   27031 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 12:47:09.181613   27031 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 12:47:09.339438   27031 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 12:47:09.456909   27031 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 12:47:09.457324   27031 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 12:47:09.478896   27031 out.go:204]   - Booting up control plane ...
	I0103 12:47:09.479036   27031 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 12:47:09.479158   27031 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 12:47:09.479289   27031 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 12:47:09.479490   27031 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 12:47:09.479856   27031 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 12:47:49.465944   27031 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0103 12:47:49.467075   27031 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:47:49.467264   27031 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:47:54.468194   27031 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:47:54.468411   27031 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:48:04.469561   27031 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:48:04.469794   27031 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:48:24.471432   27031 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:48:24.471647   27031 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:49:04.676908   27031 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:49:04.677114   27031 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:49:04.677128   27031 kubeadm.go:322] 
	I0103 12:49:04.677180   27031 kubeadm.go:322] Unfortunately, an error has occurred:
	I0103 12:49:04.677226   27031 kubeadm.go:322] 	timed out waiting for the condition
	I0103 12:49:04.677234   27031 kubeadm.go:322] 
	I0103 12:49:04.677269   27031 kubeadm.go:322] This error is likely caused by:
	I0103 12:49:04.677319   27031 kubeadm.go:322] 	- The kubelet is not running
	I0103 12:49:04.677471   27031 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0103 12:49:04.677493   27031 kubeadm.go:322] 
	I0103 12:49:04.677626   27031 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0103 12:49:04.677666   27031 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0103 12:49:04.677707   27031 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0103 12:49:04.677718   27031 kubeadm.go:322] 
	I0103 12:49:04.677839   27031 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0103 12:49:04.677952   27031 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0103 12:49:04.678013   27031 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0103 12:49:04.678049   27031 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0103 12:49:04.678104   27031 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0103 12:49:04.678135   27031 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0103 12:49:04.679506   27031 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0103 12:49:04.679580   27031 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0103 12:49:04.679693   27031 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0103 12:49:04.679775   27031 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 12:49:04.679858   27031 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0103 12:49:04.679917   27031 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0103 12:49:04.679964   27031 kubeadm.go:406] StartCluster complete in 3m54.168506428s
	I0103 12:49:04.680050   27031 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:49:04.697783   27031 logs.go:284] 0 containers: []
	W0103 12:49:04.697798   27031 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:49:04.697871   27031 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:49:04.716774   27031 logs.go:284] 0 containers: []
	W0103 12:49:04.716786   27031 logs.go:286] No container was found matching "etcd"
	I0103 12:49:04.716854   27031 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:49:04.737044   27031 logs.go:284] 0 containers: []
	W0103 12:49:04.737064   27031 logs.go:286] No container was found matching "coredns"
	I0103 12:49:04.737151   27031 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:49:04.763048   27031 logs.go:284] 0 containers: []
	W0103 12:49:04.763066   27031 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:49:04.763147   27031 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:49:04.808026   27031 logs.go:284] 0 containers: []
	W0103 12:49:04.808041   27031 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:49:04.808110   27031 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:49:04.827220   27031 logs.go:284] 0 containers: []
	W0103 12:49:04.827235   27031 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:49:04.827331   27031 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:49:04.850949   27031 logs.go:284] 0 containers: []
	W0103 12:49:04.850965   27031 logs.go:286] No container was found matching "kindnet"
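All seven container probes above come back empty: not one control-plane container was ever created, which again implicates the kubelet, since it is the component that turns the static Pod manifests in /etc/kubernetes/manifests into containers. The same sweep can be written as one illustrative loop over the exact filters used in this log:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  docker ps -a --filter "name=k8s_$c" --format '{{.ID}} {{.Status}}'
	done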
	I0103 12:49:04.850980   27031 logs.go:123] Gathering logs for container status ...
	I0103 12:49:04.850988   27031 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:49:04.909376   27031 logs.go:123] Gathering logs for kubelet ...
	I0103 12:49:04.909397   27031 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:49:04.946103   27031 logs.go:123] Gathering logs for dmesg ...
	I0103 12:49:04.946118   27031 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:49:04.959573   27031 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:49:04.959589   27031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:49:05.032189   27031 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:49:05.032202   27031 logs.go:123] Gathering logs for Docker ...
	I0103 12:49:05.032210   27031 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
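The log-gathering pass above is four plain shell commands, collected here for reference with the paths exactly as they appear in this log; the describe-nodes step is expected to fail, because the connection refused on localhost:8443 merely restates that the apiserver never came up:

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u docker -u cri-docker -n 400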
	W0103 12:49:05.107826   27031 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0103 12:49:05.107854   27031 out.go:239] * 
	W0103 12:49:05.107889   27031 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0103 12:49:05.107903   27031 out.go:239] * 
	W0103 12:49:05.108509   27031 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
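As the box says, the actionable next step when filing this upstream is to collect the full log bundle from the failed profile; a sketch, with the profile name taken from this run:

	minikube logs --file=logs.txt -p old-k8s-version-079000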
	I0103 12:49:05.170910   27031 out.go:177] 
	W0103 12:49:05.229089   27031 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0103 12:49:05.229173   27031 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0103 12:49:05.229206   27031 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0103 12:49:05.251134   27031 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-079000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-079000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-079000:

-- stdout --
	[
	    {
	        "Id": "488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3",
	        "Created": "2024-01-03T20:44:55.833825695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 305065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:44:56.088641196Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hosts",
	        "LogPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3-json.log",
	        "Name": "/old-k8s-version-079000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-079000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-079000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a-init/diff:/var/lib/docker/overlay2/d51c25870073ca49ae45bcaffff5d04b6853b273710b15cd26d3414e5d7cfab6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-079000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-079000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-079000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6357951fe2fce563f27bedcc1cb3c39c60c5eacdb7adf6e2528abd3637183942",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61415"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61416"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61417"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6357951fe2fc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-079000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "488c5550224f",
	                        "old-k8s-version-079000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "fa57a59237dbd216e3611a46ef90c42978dc8b8c11f6ffc7c61970c426e7ce95",
	                    "EndpointID": "3167c2edbedae44e0602e2352e8bbfcda02969d3b3da3871fc35ad6f31fb9b2f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 6 (425.571313ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0103 12:49:05.916799   27932 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-079000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-079000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (254.57s)
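For reference: the W-lines in the failure output above carry minikube's own suggestion for this kubelet failure. A minimal sketch of acting on it, assuming that suggestion applies to this run (the profile name, driver, and Kubernetes version are taken from the failing command above; the --extra-config flag is the one named in the suggestion, not a verified fix):

    # Recreate the profile with the kubelet cgroup driver pinned to systemd,
    # as suggested in the failure output above.
    out/minikube-darwin-amd64 delete -p old-k8s-version-079000
    out/minikube-darwin-amd64 start -p old-k8s-version-079000 --driver=docker \
        --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd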

TestStartStop/group/old-k8s-version/serial/DeployApp (0.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-079000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-079000 create -f testdata/busybox.yaml: exit status 1 (48.078615ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-079000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-079000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-079000:

-- stdout --
	[
	    {
	        "Id": "488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3",
	        "Created": "2024-01-03T20:44:55.833825695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 305065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:44:56.088641196Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hosts",
	        "LogPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3-json.log",
	        "Name": "/old-k8s-version-079000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-079000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-079000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a-init/diff:/var/lib/docker/overlay2/d51c25870073ca49ae45bcaffff5d04b6853b273710b15cd26d3414e5d7cfab6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-079000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-079000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-079000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6357951fe2fce563f27bedcc1cb3c39c60c5eacdb7adf6e2528abd3637183942",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61415"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61416"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61417"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6357951fe2fc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-079000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "488c5550224f",
	                        "old-k8s-version-079000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "fa57a59237dbd216e3611a46ef90c42978dc8b8c11f6ffc7c61970c426e7ce95",
	                    "EndpointID": "3167c2edbedae44e0602e2352e8bbfcda02969d3b3da3871fc35ad6f31fb9b2f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 6 (425.56086ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0103 12:49:06.441273   27945 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-079000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-079000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-079000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-079000:

-- stdout --
	[
	    {
	        "Id": "488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3",
	        "Created": "2024-01-03T20:44:55.833825695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 305065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:44:56.088641196Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hosts",
	        "LogPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3-json.log",
	        "Name": "/old-k8s-version-079000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-079000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-079000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a-init/diff:/var/lib/docker/overlay2/d51c25870073ca49ae45bcaffff5d04b6853b273710b15cd26d3414e5d7cfab6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-079000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-079000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-079000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6357951fe2fce563f27bedcc1cb3c39c60c5eacdb7adf6e2528abd3637183942",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61415"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61416"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61417"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6357951fe2fc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-079000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "488c5550224f",
	                        "old-k8s-version-079000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "fa57a59237dbd216e3611a46ef90c42978dc8b8c11f6ffc7c61970c426e7ce95",
	                    "EndpointID": "3167c2edbedae44e0602e2352e8bbfcda02969d3b3da3871fc35ad6f31fb9b2f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 6 (393.270829ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0103 12:49:06.899114   27957 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-079000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-079000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.98s)
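The repeated status warning above points at a stale kubectl context, and the stderr shows the profile missing from the kubeconfig. A sketch of the repair path the warning itself names, assuming the kubeconfig entry is merely stale rather than unrecoverable:

    # As the WARNING in the status output above suggests:
    out/minikube-darwin-amd64 update-context -p old-k8s-version-079000
    # Then verify the context resolves before retrying the deploy:
    kubectl --context old-k8s-version-079000 get nodes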

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (117.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-079000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0103 12:49:07.236606   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:08.905071   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:08.910181   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:08.920621   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:08.940768   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:08.982986   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:09.063558   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:09.224082   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:09.544178   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:10.184874   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:11.465065   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:12.357740   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:14.025587   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:19.146429   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:19.932058   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:49:22.598355   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:22.684159   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 12:49:29.387136   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:39.635490   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 12:49:43.079385   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:43.160237   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:49:49.868422   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:49:58.623409   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:50:01.075146   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:50:02.880017   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:50:10.870534   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:50:16.445987   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:16.451429   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:16.461702   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:16.482266   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:16.522841   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:16.605057   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:16.765340   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:17.086352   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:17.726657   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:19.006817   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:21.567274   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:24.040826   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:50:26.688257   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:30.829886   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:50:36.930501   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:50:57.411319   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:51:03.001180   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-079000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m57.122224686s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-079000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-079000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-079000 describe deploy/metrics-server -n kube-system: exit status 1 (37.035489ms)

** stderr ** 
	error: context "old-k8s-version-079000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-079000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
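Note: every `kubectl apply` above failed with "connection refused" against https://localhost:8443, i.e. the apiserver inside the old-k8s-version-079000 container was not reachable when the addon callbacks ran, so the enable step could only fail. A minimal triage sketch for this state, using only stock docker/minikube commands and the profile name taken from the log (a hypothetical session, not part of the test run):

	# Is the kic container itself still running?
	docker inspect -f '{{.State.Status}}' old-k8s-version-079000

	# What do minikube's own host/kubelet/apiserver checks say?
	minikube status -p old-k8s-version-079000

	# Collect full logs for a bug report, as the error box above suggests
	minikube logs -p old-k8s-version-079000 --file=logs.txt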
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-079000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-079000:

-- stdout --
	[
	    {
	        "Id": "488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3",
	        "Created": "2024-01-03T20:44:55.833825695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 305065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:44:56.088641196Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hosts",
	        "LogPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3-json.log",
	        "Name": "/old-k8s-version-079000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-079000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-079000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a-init/diff:/var/lib/docker/overlay2/d51c25870073ca49ae45bcaffff5d04b6853b273710b15cd26d3414e5d7cfab6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-079000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-079000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-079000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6357951fe2fce563f27bedcc1cb3c39c60c5eacdb7adf6e2528abd3637183942",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61415"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61416"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61417"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6357951fe2fc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-079000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "488c5550224f",
	                        "old-k8s-version-079000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "fa57a59237dbd216e3611a46ef90c42978dc8b8c11f6ffc7c61970c426e7ce95",
	                    "EndpointID": "3167c2edbedae44e0602e2352e8bbfcda02969d3b3da3871fc35ad6f31fb9b2f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
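The full `docker inspect` dump above is what the post-mortem helper captures; when only a few fields matter, the same data can be pulled with Go templates. A small sketch (standard docker CLI; the field paths follow the JSON shown above, and the 8443/tcp template mirrors the 22/tcp one minikube itself runs later in this log):

	# Container state and init PID
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' old-k8s-version-079000

	# Host port published for the apiserver port 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-079000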
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 6 (394.166327ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0103 12:51:04.507283   28008 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-079000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-079000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (117.61s)
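The failure mode here is a stale kubeconfig: status reports the container as Running, but the old-k8s-version-079000 entry is missing from the kubeconfig, so every kubectl call (including the describe above) dies with "context does not exist". A repair sketch following the warning's own advice (a hypothetical session, not part of the test run):

	# Rewrite the kubeconfig entry for this profile
	minikube update-context -p old-k8s-version-079000

	# Verify and select the context
	kubectl config get-contexts
	kubectl config use-context old-k8s-version-079000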

TestStartStop/group/old-k8s-version/serial/SecondStart (504.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-079000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0103 12:51:30.717949   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:51:38.373122   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:51:45.963723   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:51:49.161733   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:51:52.752921   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
E0103 12:52:17.234420   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:52:19.038701   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:52:44.920150   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:52:46.724384   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:52:56.881470   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:53:00.295432   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
E0103 12:53:12.617401   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:54:02.113963   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:54:08.912871   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-079000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m22.297973363s)

-- stdout --
	* [old-k8s-version-079000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-079000 in cluster old-k8s-version-079000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Restarting existing docker container for "old-k8s-version-079000" ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0103 12:51:06.593034   28038 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:51:06.593255   28038 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:51:06.593261   28038 out.go:309] Setting ErrFile to fd 2...
	I0103 12:51:06.593265   28038 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:51:06.593450   28038 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:51:06.594826   28038 out.go:303] Setting JSON to false
	I0103 12:51:06.617149   28038 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8436,"bootTime":1704306630,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 12:51:06.617260   28038 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 12:51:06.639342   28038 out.go:177] * [old-k8s-version-079000] minikube v1.32.0 on Darwin 14.2
	I0103 12:51:06.681997   28038 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 12:51:06.682093   28038 notify.go:220] Checking for updates...
	I0103 12:51:06.703881   28038 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 12:51:06.724767   28038 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 12:51:06.745735   28038 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 12:51:06.766822   28038 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	I0103 12:51:06.787675   28038 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 12:51:06.809602   28038 config.go:182] Loaded profile config "old-k8s-version-079000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0103 12:51:06.831892   28038 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0103 12:51:06.852674   28038 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 12:51:06.910819   28038 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 12:51:06.910967   28038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:51:07.014482   28038 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 20:51:07.004260622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:51:07.056312   28038 out.go:177] * Using the docker driver based on existing profile
	I0103 12:51:07.077304   28038 start.go:298] selected driver: docker
	I0103 12:51:07.077326   28038 start.go:902] validating driver "docker" against &{Name:old-k8s-version-079000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:51:07.077438   28038 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 12:51:07.080577   28038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:51:07.184801   28038 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 20:51:07.174313337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:51:07.185036   28038 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 12:51:07.185111   28038 cni.go:84] Creating CNI manager for ""
	I0103 12:51:07.185125   28038 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0103 12:51:07.185134   28038 start_flags.go:323] config:
	{Name:old-k8s-version-079000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:51:07.206874   28038 out.go:177] * Starting control plane node old-k8s-version-079000 in cluster old-k8s-version-079000
	I0103 12:51:07.227381   28038 cache.go:121] Beginning downloading kic base image for docker with docker
	I0103 12:51:07.248345   28038 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 12:51:07.290465   28038 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0103 12:51:07.290549   28038 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0103 12:51:07.290580   28038 cache.go:56] Caching tarball of preloaded images
	I0103 12:51:07.290582   28038 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 12:51:07.290804   28038 preload.go:174] Found /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0103 12:51:07.290825   28038 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0103 12:51:07.290958   28038 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/config.json ...
	I0103 12:51:07.343216   28038 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 12:51:07.343244   28038 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 12:51:07.343286   28038 cache.go:194] Successfully downloaded all kic artifacts
	I0103 12:51:07.343347   28038 start.go:365] acquiring machines lock for old-k8s-version-079000: {Name:mkefdae168ae5396c7edce5050a591938b306f62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 12:51:07.343472   28038 start.go:369] acquired machines lock for "old-k8s-version-079000" in 95.775µs
	I0103 12:51:07.343499   28038 start.go:96] Skipping create...Using existing machine configuration
	I0103 12:51:07.343508   28038 fix.go:54] fixHost starting: 
	I0103 12:51:07.343763   28038 cli_runner.go:164] Run: docker container inspect old-k8s-version-079000 --format={{.State.Status}}
	I0103 12:51:07.394818   28038 fix.go:102] recreateIfNeeded on old-k8s-version-079000: state=Stopped err=<nil>
	W0103 12:51:07.394859   28038 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 12:51:07.416217   28038 out.go:177] * Restarting existing docker container for "old-k8s-version-079000" ...
	I0103 12:51:07.474400   28038 cli_runner.go:164] Run: docker start old-k8s-version-079000
	I0103 12:51:07.728899   28038 cli_runner.go:164] Run: docker container inspect old-k8s-version-079000 --format={{.State.Status}}
	I0103 12:51:07.784946   28038 kic.go:430] container "old-k8s-version-079000" state is running.
	I0103 12:51:07.785551   28038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-079000
	I0103 12:51:07.840714   28038 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/config.json ...
	I0103 12:51:07.841125   28038 machine.go:88] provisioning docker machine ...
	I0103 12:51:07.841151   28038 ubuntu.go:169] provisioning hostname "old-k8s-version-079000"
	I0103 12:51:07.841227   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:07.911252   28038 main.go:141] libmachine: Using SSH client type: native
	I0103 12:51:07.911772   28038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61670 <nil> <nil>}
	I0103 12:51:07.911794   28038 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-079000 && echo "old-k8s-version-079000" | sudo tee /etc/hostname
	I0103 12:51:07.913539   28038 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0103 12:51:11.042238   28038 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-079000
	
	I0103 12:51:11.042329   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:11.094648   28038 main.go:141] libmachine: Using SSH client type: native
	I0103 12:51:11.094941   28038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61670 <nil> <nil>}
	I0103 12:51:11.094955   28038 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-079000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-079000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-079000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 12:51:11.214941   28038 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 12:51:11.214969   28038 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17885-10646/.minikube CaCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17885-10646/.minikube}
	I0103 12:51:11.214997   28038 ubuntu.go:177] setting up certificates
	I0103 12:51:11.215023   28038 provision.go:83] configureAuth start
	I0103 12:51:11.215093   28038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-079000
	I0103 12:51:11.267628   28038 provision.go:138] copyHostCerts
	I0103 12:51:11.267733   28038 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem, removing ...
	I0103 12:51:11.267743   28038 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem
	I0103 12:51:11.267884   28038 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem (1078 bytes)
	I0103 12:51:11.268124   28038 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem, removing ...
	I0103 12:51:11.268131   28038 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem
	I0103 12:51:11.268227   28038 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem (1123 bytes)
	I0103 12:51:11.268428   28038 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem, removing ...
	I0103 12:51:11.268435   28038 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem
	I0103 12:51:11.268525   28038 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem (1679 bytes)
	I0103 12:51:11.269118   28038 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-079000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-079000]
	I0103 12:51:11.432227   28038 provision.go:172] copyRemoteCerts
	I0103 12:51:11.432355   28038 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 12:51:11.432449   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:11.484958   28038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61670 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa Username:docker}
	I0103 12:51:11.571806   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 12:51:11.592005   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 12:51:11.612914   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 12:51:11.635501   28038 provision.go:86] duration metric: configureAuth took 420.449129ms
	I0103 12:51:11.635517   28038 ubuntu.go:193] setting minikube options for container-runtime
	I0103 12:51:11.635700   28038 config.go:182] Loaded profile config "old-k8s-version-079000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0103 12:51:11.635781   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:11.688333   28038 main.go:141] libmachine: Using SSH client type: native
	I0103 12:51:11.688633   28038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61670 <nil> <nil>}
	I0103 12:51:11.688646   28038 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0103 12:51:11.806423   28038 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0103 12:51:11.806442   28038 ubuntu.go:71] root file system type: overlay
	I0103 12:51:11.806538   28038 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0103 12:51:11.806641   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:11.859403   28038 main.go:141] libmachine: Using SSH client type: native
	I0103 12:51:11.859716   28038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61670 <nil> <nil>}
	I0103 12:51:11.859765   28038 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0103 12:51:11.987932   28038 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0103 12:51:11.988023   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:12.039769   28038 main.go:141] libmachine: Using SSH client type: native
	I0103 12:51:12.040063   28038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61670 <nil> <nil>}
	I0103 12:51:12.040078   28038 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0103 12:51:12.161655   28038 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 12:51:12.161673   28038 machine.go:91] provisioned docker machine in 4.320429076s
	I0103 12:51:12.161680   28038 start.go:300] post-start starting for "old-k8s-version-079000" (driver="docker")
	I0103 12:51:12.161690   28038 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 12:51:12.161783   28038 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 12:51:12.161839   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:12.213812   28038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61670 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa Username:docker}
	I0103 12:51:12.300926   28038 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 12:51:12.304826   28038 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 12:51:12.304853   28038 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 12:51:12.304861   28038 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 12:51:12.304867   28038 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 12:51:12.304879   28038 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/addons for local assets ...
	I0103 12:51:12.304976   28038 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/files for local assets ...
	I0103 12:51:12.305167   28038 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> 110902.pem in /etc/ssl/certs
	I0103 12:51:12.305367   28038 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 12:51:12.313577   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:51:12.334449   28038 start.go:303] post-start completed in 172.754737ms
	I0103 12:51:12.334528   28038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 12:51:12.334627   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:12.386525   28038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61670 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa Username:docker}
	I0103 12:51:12.469831   28038 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 12:51:12.474834   28038 fix.go:56] fixHost completed within 5.131191222s
	I0103 12:51:12.474852   28038 start.go:83] releasing machines lock for "old-k8s-version-079000", held for 5.131239269s
	I0103 12:51:12.474934   28038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-079000
	I0103 12:51:12.526863   28038 ssh_runner.go:195] Run: cat /version.json
	I0103 12:51:12.526869   28038 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 12:51:12.526938   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:12.526955   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:12.580499   28038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61670 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa Username:docker}
	I0103 12:51:12.580556   28038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61670 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/old-k8s-version-079000/id_rsa Username:docker}
	I0103 12:51:12.767852   28038 ssh_runner.go:195] Run: systemctl --version
	I0103 12:51:12.772759   28038 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0103 12:51:12.777711   28038 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0103 12:51:12.777762   28038 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0103 12:51:12.786350   28038 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0103 12:51:12.794684   28038 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
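For readability, the gnarly find/sed one-liners above can be unpacked. Below is a hedged, simplified bash equivalent of the bridge-CNI rewrite (the real command also rewrites "dst" routes and drops IPv6 entries): it pins any bridge config's "subnet" to the pod CIDR 10.244.0.0/16, skipping podman and .mk_disabled files. Here nothing matched, hence the "nothing to configure" line.

    for f in /etc/cni/net.d/*bridge*; do
      [ -e "$f" ] || continue                                # glob may match nothing
      case "$f" in *podman*|*.mk_disabled) continue ;; esac  # same exclusions as the find
      sudo sed -i -r 's|"subnet": ".*"|"subnet": "10.244.0.0/16"|g' "$f"
    done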
	I0103 12:51:12.794700   28038 start.go:475] detecting cgroup driver to use...
	I0103 12:51:12.794717   28038 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:51:12.794825   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:51:12.809547   28038 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0103 12:51:12.819338   28038 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0103 12:51:12.828868   28038 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0103 12:51:12.828932   28038 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0103 12:51:12.838242   28038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:51:12.847734   28038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0103 12:51:12.857382   28038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:51:12.866764   28038 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 12:51:12.875557   28038 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
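Taken together, the sed edits above rewrite a handful of keys in /etc/containerd/config.toml. A sketch of the lines they leave behind (exact TOML section paths vary by containerd version, so this is illustrative only):

    grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' /etc/containerd/config.toml
    # SystemdCgroup = false                         <- cgroupfs, matching the detected host driver
    # sandbox_image = "registry.k8s.io/pause:3.1"   <- pause image pinned for this k8s version
    # restrict_oom_score_adj = false
    # conf_dir = "/etc/cni/net.d"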
	I0103 12:51:12.885239   28038 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 12:51:12.893403   28038 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 12:51:12.901362   28038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:51:12.952787   28038 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0103 12:51:13.032352   28038 start.go:475] detecting cgroup driver to use...
	I0103 12:51:13.032378   28038 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:51:13.032465   28038 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0103 12:51:13.044039   28038 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0103 12:51:13.044126   28038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0103 12:51:13.056177   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:51:13.072981   28038 ssh_runner.go:195] Run: which cri-dockerd
	I0103 12:51:13.077400   28038 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0103 12:51:13.086495   28038 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0103 12:51:13.103392   28038 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0103 12:51:13.184451   28038 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0103 12:51:13.237712   28038 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0103 12:51:13.237805   28038 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
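The log only records that 130 bytes were written to /etc/docker/daemon.json, not the JSON itself; a plausible minimal sketch that would select the cgroupfs driver as described (native.cgroupdriver is a real dockerd exec-opt, but the exact file content here is an assumption):

    cat /etc/docker/daemon.json
    # {
    #   "exec-opts": ["native.cgroupdriver=cgroupfs"],
    #   ...
    # }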
	I0103 12:51:13.275499   28038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:51:13.330387   28038 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 12:51:13.567940   28038 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 12:51:13.592892   28038 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 12:51:13.691923   28038 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0103 12:51:13.692003   28038 cli_runner.go:164] Run: docker exec -t old-k8s-version-079000 dig +short host.docker.internal
	I0103 12:51:13.812745   28038 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0103 12:51:13.812842   28038 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0103 12:51:13.817552   28038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
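The /etc/hosts update above is a single idempotent one-liner; spread out, it reads:

    { grep -v $'\thost.minikube.internal$' /etc/hosts       # drop any stale entry
      echo "192.168.65.254	host.minikube.internal"          # re-add with the freshly dug IP
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                            # cp (not mv) keeps the original file's owner and mode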
	I0103 12:51:13.828192   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:13.880043   28038 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0103 12:51:13.880120   28038 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:51:13.899933   28038 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0103 12:51:13.899948   28038 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0103 12:51:13.900021   28038 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0103 12:51:13.908423   28038 ssh_runner.go:195] Run: which lz4
	I0103 12:51:13.912641   28038 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 12:51:13.916764   28038 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 12:51:13.916796   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0103 12:51:19.083092   28038 docker.go:635] Took 5.170378 seconds to copy over tarball
	I0103 12:51:19.083172   28038 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 12:51:20.640749   28038 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.557517132s)
	I0103 12:51:20.640782   28038 ssh_runner.go:146] rm: /preloaded.tar.lz4
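The preload restore is just an lz4-compressed tarball unpacked over /var, which is where /var/lib/docker lives; the two commands above, on their own:

    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # -I: filter the archive through the lz4 binary
    sudo rm /preloaded.tar.lz4                       # the tarball is ~370 MB, so reclaim the space immediately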
	I0103 12:51:20.679993   28038 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0103 12:51:20.688896   28038 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0103 12:51:20.704680   28038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:51:20.758993   28038 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 12:51:21.131952   28038 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:51:21.151986   28038 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0103 12:51:21.152001   28038 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0103 12:51:21.152013   28038 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 12:51:21.157685   28038 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:51:21.158147   28038 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:51:21.158333   28038 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:51:21.158337   28038 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0103 12:51:21.158613   28038 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0103 12:51:21.158681   28038 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:51:21.158620   28038 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0103 12:51:21.158994   28038 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:51:21.163588   28038 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:51:21.163637   28038 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:51:21.163686   28038 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0103 12:51:21.165429   28038 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:51:21.165349   28038 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:51:21.165693   28038 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0103 12:51:21.165736   28038 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0103 12:51:21.167373   28038 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:51:21.603999   28038 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:51:21.605459   28038 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:51:21.607526   28038 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0103 12:51:21.639520   28038 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0103 12:51:21.639571   28038 docker.go:323] Removing image: registry.k8s.io/pause:3.1
	I0103 12:51:21.639630   28038 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0103 12:51:21.640140   28038 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0103 12:51:21.640172   28038 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:51:21.640183   28038 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0103 12:51:21.640208   28038 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:51:21.640256   28038 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0103 12:51:21.640259   28038 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0103 12:51:21.649431   28038 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:51:21.653890   28038 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:51:21.679608   28038 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0103 12:51:21.679660   28038 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0103 12:51:21.679696   28038 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0103 12:51:21.688956   28038 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0103 12:51:21.688988   28038 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:51:21.689074   28038 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0103 12:51:21.691923   28038 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0103 12:51:21.691955   28038 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:51:21.692021   28038 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0103 12:51:21.711475   28038 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0103 12:51:21.714486   28038 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0103 12:51:21.744411   28038 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0103 12:51:21.763670   28038 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0103 12:51:21.763698   28038 docker.go:323] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0103 12:51:21.763763   28038 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0103 12:51:21.781090   28038 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0103 12:51:21.783568   28038 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0103 12:51:21.800170   28038 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0103 12:51:21.800197   28038 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.2
	I0103 12:51:21.800257   28038 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0103 12:51:21.818323   28038 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0103 12:51:22.209060   28038 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 12:51:22.227834   28038 cache_images.go:92] LoadImages completed in 1.075781936s
	W0103 12:51:22.227888   28038 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0103 12:51:22.227983   28038 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0103 12:51:22.275496   28038 cni.go:84] Creating CNI manager for ""
	I0103 12:51:22.275513   28038 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0103 12:51:22.275526   28038 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 12:51:22.275543   28038 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-079000 NodeName:old-k8s-version-079000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 12:51:22.275661   28038 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-079000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-079000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 12:51:22.275726   28038 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-079000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 12:51:22.275790   28038 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0103 12:51:22.284387   28038 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 12:51:22.284448   28038 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 12:51:22.292966   28038 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0103 12:51:22.308855   28038 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 12:51:22.324464   28038 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0103 12:51:22.341599   28038 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0103 12:51:22.345923   28038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 12:51:22.356692   28038 certs.go:56] Setting up /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000 for IP: 192.168.76.2
	I0103 12:51:22.356713   28038 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a30c05f18415c794a1ae2617714fd3a6ba516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:51:22.356907   28038 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key
	I0103 12:51:22.356986   28038 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key
	I0103 12:51:22.357100   28038 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/client.key
	I0103 12:51:22.357193   28038 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.key.31bdca25
	I0103 12:51:22.357263   28038 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/proxy-client.key
	I0103 12:51:22.357478   28038 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem (1338 bytes)
	W0103 12:51:22.357523   28038 certs.go:433] ignoring /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090_empty.pem, impossibly tiny 0 bytes
	I0103 12:51:22.357533   28038 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 12:51:22.357564   28038 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem (1078 bytes)
	I0103 12:51:22.357596   28038 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem (1123 bytes)
	I0103 12:51:22.357624   28038 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem (1679 bytes)
	I0103 12:51:22.357689   28038 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:51:22.358277   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 12:51:22.379322   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 12:51:22.399777   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 12:51:22.420968   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/old-k8s-version-079000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 12:51:22.441486   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 12:51:22.461904   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 12:51:22.482716   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 12:51:22.504161   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 12:51:22.524835   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 12:51:22.545417   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem --> /usr/share/ca-certificates/11090.pem (1338 bytes)
	I0103 12:51:22.565901   28038 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /usr/share/ca-certificates/110902.pem (1708 bytes)
	I0103 12:51:22.586447   28038 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 12:51:22.602369   28038 ssh_runner.go:195] Run: openssl version
	I0103 12:51:22.607922   28038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 12:51:22.617283   28038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:51:22.621467   28038 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:51:22.621528   28038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:51:22.628038   28038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 12:51:22.636423   28038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11090.pem && ln -fs /usr/share/ca-certificates/11090.pem /etc/ssl/certs/11090.pem"
	I0103 12:51:22.645520   28038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11090.pem
	I0103 12:51:22.650119   28038 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:57 /usr/share/ca-certificates/11090.pem
	I0103 12:51:22.650162   28038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11090.pem
	I0103 12:51:22.656775   28038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11090.pem /etc/ssl/certs/51391683.0"
	I0103 12:51:22.665201   28038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110902.pem && ln -fs /usr/share/ca-certificates/110902.pem /etc/ssl/certs/110902.pem"
	I0103 12:51:22.674250   28038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110902.pem
	I0103 12:51:22.678450   28038 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:57 /usr/share/ca-certificates/110902.pem
	I0103 12:51:22.678497   28038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110902.pem
	I0103 12:51:22.685246   28038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110902.pem /etc/ssl/certs/3ec20f2e.0"
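The odd-looking symlink names (b5213941.0, 51391683.0, 3ec20f2e.0) come from OpenSSL's subject-hash scheme, which the openssl x509 -hash runs above compute; a sketch of the naming rule:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"             # ".0" = first cert with this hash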
	I0103 12:51:22.693746   28038 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 12:51:22.697885   28038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 12:51:22.704128   28038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 12:51:22.710446   28038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 12:51:22.716720   28038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 12:51:22.723173   28038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 12:51:22.729306   28038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
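Each of the six openssl runs above is a validity probe: -checkend N exits non-zero if the certificate expires within N seconds, so 86400 asks "is this cert still good for at least 24 hours?". For example:

    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
      && echo "ok: valid for at least 24h" \
      || echo "expiring within 24h"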
	I0103 12:51:22.735892   28038 kubeadm.go:404] StartCluster: {Name:old-k8s-version-079000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:51:22.736009   28038 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 12:51:22.755741   28038 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 12:51:22.764628   28038 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 12:51:22.764644   28038 kubeadm.go:636] restartCluster start
	I0103 12:51:22.764696   28038 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 12:51:22.772802   28038 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:22.772872   28038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-079000
	I0103 12:51:22.825205   28038 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-079000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 12:51:22.825351   28038 kubeconfig.go:146] "old-k8s-version-079000" context is missing from /Users/jenkins/minikube-integration/17885-10646/kubeconfig - will repair!
	I0103 12:51:22.825662   28038 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/kubeconfig: {Name:mk61966fd03b327572b428e807810fbe63a7e94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:51:22.827154   28038 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 12:51:22.836054   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:22.836105   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:22.845403   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
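This probe, repeated roughly every 500ms below, runs pgrep with three flags: -f matches against the full command line (so the kube-apiserver.*minikube.* pattern can see the binary's arguments), -x requires the whole command line to match the pattern, and -n returns only the newest matching process. It therefore prints exactly one PID once the apiserver is up, and exits 1 (as here) while it is down:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # prints a PID when the apiserver is up; exit status 1 while it is not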
	I0103 12:51:23.336145   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:23.336228   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:23.346577   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:23.836370   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:23.836535   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:23.847433   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:24.336290   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:24.336427   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:24.347137   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:24.836213   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:24.836323   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:24.847494   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:25.336999   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:25.337097   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:25.346849   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:25.836279   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:25.836461   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:25.847394   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:26.337045   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:26.337276   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:26.353702   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:26.836678   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:26.836795   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:26.848727   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:27.336360   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:27.336433   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:27.346460   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:27.836354   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:27.836455   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:27.847255   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:28.337281   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:28.337347   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:28.347162   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:28.836607   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:28.836711   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:28.848051   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:29.336429   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:29.336577   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:29.346831   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:29.836326   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:29.836532   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:29.847798   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:30.337122   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:30.337251   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:30.346816   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:30.836489   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:30.836654   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:30.847766   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:31.337120   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:31.337284   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:31.346841   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:31.836384   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:31.836457   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:31.846690   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:32.336371   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:32.336461   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:32.346811   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:32.836421   28038 api_server.go:166] Checking apiserver status ...
	I0103 12:51:32.836597   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:51:32.847471   28038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:51:32.847485   28038 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0103 12:51:32.847499   28038 kubeadm.go:1135] stopping kube-system containers ...
	I0103 12:51:32.847583   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 12:51:32.865485   28038 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 12:51:32.876772   28038 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:51:32.885420   28038 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Jan  3 20:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Jan  3 20:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Jan  3 20:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Jan  3 20:47 /etc/kubernetes/scheduler.conf
	
	I0103 12:51:32.885482   28038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0103 12:51:32.893791   28038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0103 12:51:32.902222   28038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0103 12:51:32.910743   28038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0103 12:51:32.920200   28038 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 12:51:32.928607   28038 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 12:51:32.928624   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:51:32.980195   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:51:34.271403   28038 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.291156258s)
	I0103 12:51:34.271418   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:51:34.459934   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:51:34.520653   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
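Rather than a full kubeadm init, the restart path replays individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml; the sequence above, condensed (KPATH is just a shorthand introduced here):

    KPATH=/var/lib/minikube/binaries/v1.16.0
    sudo env PATH="$KPATH:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KPATH:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KPATH:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KPATH:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KPATH:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml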
	I0103 12:51:34.600210   28038 api_server.go:52] waiting for apiserver process to appear ...
	I0103 12:51:34.600285   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:35.101013   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:35.600678   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:36.101078   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:36.600470   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:37.102028   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:37.600944   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:38.100864   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:38.600608   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:39.100617   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:39.600916   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:40.100957   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:40.600777   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:41.101773   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:41.600627   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:42.100573   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:42.601031   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:43.101362   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:43.601028   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:44.101927   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:44.600753   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:45.100741   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:45.600797   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:46.101243   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:46.601714   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:47.101202   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:47.601415   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:48.102265   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:48.602797   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:49.100844   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:49.600799   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:50.102019   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:50.602510   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:51.101882   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:51.600886   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:52.101239   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:52.600988   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:53.101108   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:53.600956   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:54.100955   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:54.601947   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:55.100927   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:55.601538   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:56.103077   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:56.601408   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:57.102441   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:57.601467   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:58.101365   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:58.601238   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:59.101224   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:51:59.601056   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:00.102186   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:00.603107   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:01.101825   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:01.602197   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:02.101110   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:02.601752   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:03.102104   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:03.602432   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:04.101425   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:04.601217   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:05.101704   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:05.601329   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:06.101560   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:06.602693   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:07.101412   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:07.601342   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:08.102559   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:08.601319   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:09.101623   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:09.601641   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:10.101359   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:10.602151   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:11.102101   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:11.602884   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:12.101954   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:12.603465   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:13.101672   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:13.601448   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:14.102400   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:14.603514   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:15.103241   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:15.602160   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:16.101493   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:16.602098   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:17.102159   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:17.601564   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:18.101539   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:18.601966   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:19.101769   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:19.602231   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:20.101573   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:20.601719   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:21.101610   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:21.601714   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:22.101799   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:22.602088   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:23.101988   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:23.602826   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:24.101818   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:24.602293   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:25.101767   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:25.601954   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:26.101969   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:26.601854   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:27.102323   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:27.601753   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:28.103850   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:28.602230   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:29.101972   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:29.602919   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:30.101874   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:30.602227   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:31.102502   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:31.602219   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:32.102262   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:32.602190   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:33.102415   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:33.603793   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:34.102237   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
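The run of pgrep probes above is minikube polling the node at roughly 500 ms intervals for a kube-apiserver process; once the probe keeps failing it falls through to the container and log diagnostics that follow. A minimal Go sketch of that kind of poll loop, assuming a local shell rather than minikube's SSH runner (the pattern and cadence are taken from the log lines above; apiserverRunning is an invented helper, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a process matching the pattern exists,
// using the same pgrep invocation that appears in the log. pgrep exits 0
// when at least one process matches, non-zero otherwise.
func apiserverRunning(pattern string) bool {
	return exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil
}

func main() {
	const pattern = "kube-apiserver.*minikube.*"
	deadline := time.Now().Add(20 * time.Second) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning(pattern) {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence above
	}
	fmt.Println("timed out waiting for kube-apiserver")
}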
	I0103 12:52:34.602135   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:52:34.622341   28038 logs.go:284] 0 containers: []
	W0103 12:52:34.622356   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:52:34.622424   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:52:34.641241   28038 logs.go:284] 0 containers: []
	W0103 12:52:34.641257   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:52:34.641327   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:52:34.659331   28038 logs.go:284] 0 containers: []
	W0103 12:52:34.659345   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:52:34.659421   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:52:34.677801   28038 logs.go:284] 0 containers: []
	W0103 12:52:34.677817   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:52:34.677900   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:52:34.697357   28038 logs.go:284] 0 containers: []
	W0103 12:52:34.697371   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:52:34.697444   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:52:34.716213   28038 logs.go:284] 0 containers: []
	W0103 12:52:34.716227   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:52:34.716302   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:52:34.734715   28038 logs.go:284] 0 containers: []
	W0103 12:52:34.734742   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:52:34.734812   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:52:34.754752   28038 logs.go:284] 0 containers: []
	W0103 12:52:34.754766   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:52:34.754778   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:52:34.754786   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:52:34.790456   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:52:34.790473   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:52:34.803470   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:52:34.803487   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:52:34.867090   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:52:34.867109   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:52:34.867118   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:52:34.881894   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:52:34.881913   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
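The "container status" command just above prefers crictl when it is installed and falls back to docker ps -a otherwise; the backquoted `which crictl || echo crictl` lets the bash one-liner degrade gracefully when crictl is missing. The same logic unrolled as a Go sketch (illustrative only; containerStatus is an invented name, and the real command runs over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus lists all containers, preferring the CRI-aware crictl and
// falling back to the docker CLI, mirroring the logged bash pipeline.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}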
	I0103 12:52:37.446837   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:37.456935   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:52:37.476790   28038 logs.go:284] 0 containers: []
	W0103 12:52:37.476817   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:52:37.476894   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:52:37.496347   28038 logs.go:284] 0 containers: []
	W0103 12:52:37.496362   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:52:37.496452   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:52:37.515436   28038 logs.go:284] 0 containers: []
	W0103 12:52:37.515449   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:52:37.515545   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:52:37.534624   28038 logs.go:284] 0 containers: []
	W0103 12:52:37.534636   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:52:37.534705   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:52:37.554607   28038 logs.go:284] 0 containers: []
	W0103 12:52:37.554621   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:52:37.554709   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:52:37.588558   28038 logs.go:284] 0 containers: []
	W0103 12:52:37.588572   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:52:37.588644   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:52:37.607262   28038 logs.go:284] 0 containers: []
	W0103 12:52:37.607277   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:52:37.607356   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:52:37.625571   28038 logs.go:284] 0 containers: []
	W0103 12:52:37.625585   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:52:37.625593   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:52:37.625608   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:52:37.675284   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:52:37.675301   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:52:37.709713   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:52:37.709729   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:52:37.722204   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:52:37.722221   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:52:37.775175   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:52:37.775191   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:52:37.775200   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
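Each diagnostic cycle opens with the same docker ps sweep, one name filter per control-plane component, matching the k8s_<component> name prefix that Kubernetes-managed containers get under the Docker runtime; every sweep in this log returns "0 containers" because the control plane never came up. A hedged Go sketch of that sweep (the component list is copied from the log; containerIDs is an invented helper, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose names match the
// k8s_<component> prefix, using the same docker ps invocation as the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "kubernetes-dashboard"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}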
	I0103 12:52:40.289952   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:40.300888   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:52:40.319555   28038 logs.go:284] 0 containers: []
	W0103 12:52:40.319569   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:52:40.319637   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:52:40.338826   28038 logs.go:284] 0 containers: []
	W0103 12:52:40.338845   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:52:40.338930   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:52:40.358335   28038 logs.go:284] 0 containers: []
	W0103 12:52:40.358351   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:52:40.358426   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:52:40.377084   28038 logs.go:284] 0 containers: []
	W0103 12:52:40.377098   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:52:40.377175   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:52:40.394555   28038 logs.go:284] 0 containers: []
	W0103 12:52:40.394570   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:52:40.394646   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:52:40.412399   28038 logs.go:284] 0 containers: []
	W0103 12:52:40.412413   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:52:40.412480   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:52:40.432268   28038 logs.go:284] 0 containers: []
	W0103 12:52:40.432282   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:52:40.432355   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:52:40.450564   28038 logs.go:284] 0 containers: []
	W0103 12:52:40.450579   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:52:40.450586   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:52:40.450592   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:52:40.486131   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:52:40.486150   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:52:40.501000   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:52:40.501022   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:52:40.582334   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:52:40.582347   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:52:40.582356   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:52:40.597112   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:52:40.597126   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:52:43.149958   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:43.161188   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:52:43.179701   28038 logs.go:284] 0 containers: []
	W0103 12:52:43.179716   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:52:43.179790   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:52:43.197835   28038 logs.go:284] 0 containers: []
	W0103 12:52:43.197848   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:52:43.197926   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:52:43.216203   28038 logs.go:284] 0 containers: []
	W0103 12:52:43.216217   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:52:43.216293   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:52:43.235271   28038 logs.go:284] 0 containers: []
	W0103 12:52:43.235284   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:52:43.235351   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:52:43.254158   28038 logs.go:284] 0 containers: []
	W0103 12:52:43.254171   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:52:43.254245   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:52:43.272816   28038 logs.go:284] 0 containers: []
	W0103 12:52:43.272830   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:52:43.272938   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:52:43.290552   28038 logs.go:284] 0 containers: []
	W0103 12:52:43.290565   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:52:43.290640   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:52:43.309972   28038 logs.go:284] 0 containers: []
	W0103 12:52:43.309992   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:52:43.310008   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:52:43.310023   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:52:43.324704   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:52:43.324719   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:52:43.375990   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:52:43.376005   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:52:43.410922   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:52:43.410938   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:52:43.424204   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:52:43.424220   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:52:43.473949   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
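The recurring "connection to the server localhost:8443 was refused" stderr means nothing is accepting TCP connections on the apiserver port, which is consistent with the empty kube-apiserver container listings above: kubectl fails immediately rather than timing out. A small illustrative probe that distinguishes a closed port (immediate refusal) from a hung listener (dial timeout); the host and port are taken from the error text:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no kube-apiserver listening, this returns the same
		// immediate "connection refused" that kubectl reports above.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}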
	I0103 12:52:45.974537   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:45.985970   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:52:46.004406   28038 logs.go:284] 0 containers: []
	W0103 12:52:46.004421   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:52:46.004493   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:52:46.023212   28038 logs.go:284] 0 containers: []
	W0103 12:52:46.023225   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:52:46.023283   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:52:46.042053   28038 logs.go:284] 0 containers: []
	W0103 12:52:46.042066   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:52:46.042141   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:52:46.060908   28038 logs.go:284] 0 containers: []
	W0103 12:52:46.060925   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:52:46.061030   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:52:46.081550   28038 logs.go:284] 0 containers: []
	W0103 12:52:46.081565   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:52:46.081644   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:52:46.100265   28038 logs.go:284] 0 containers: []
	W0103 12:52:46.100283   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:52:46.100347   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:52:46.120571   28038 logs.go:284] 0 containers: []
	W0103 12:52:46.120586   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:52:46.120651   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:52:46.138721   28038 logs.go:284] 0 containers: []
	W0103 12:52:46.138735   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:52:46.138743   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:52:46.138751   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:52:46.191848   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:52:46.191862   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:52:46.191871   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:52:46.206691   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:52:46.206709   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:52:46.259111   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:52:46.259126   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:52:46.295215   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:52:46.295231   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
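The "Gathering logs for ..." steps are table-driven: each named collector is one shell pipeline run on the node (kubelet and Docker via journalctl, dmesg filtered to warnings and worse, container status via the crictl/docker fallback). A sketch of that structure, with names and pipelines copied from the log lines; the table-driven code itself is only illustrative, not minikube's logs.go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One entry per collector seen in the log; order varies between cycles.
	collectors := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range collectors {
		fmt.Println("Gathering logs for", c.name, "...")
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", c.name, err)
		}
		fmt.Print(string(out))
	}
}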
	I0103 12:52:48.808392   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:48.819504   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:52:48.837016   28038 logs.go:284] 0 containers: []
	W0103 12:52:48.837035   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:52:48.837128   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:52:48.855274   28038 logs.go:284] 0 containers: []
	W0103 12:52:48.855291   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:52:48.855372   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:52:48.874389   28038 logs.go:284] 0 containers: []
	W0103 12:52:48.874404   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:52:48.874480   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:52:48.892510   28038 logs.go:284] 0 containers: []
	W0103 12:52:48.892523   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:52:48.892605   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:52:48.911505   28038 logs.go:284] 0 containers: []
	W0103 12:52:48.911518   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:52:48.911579   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:52:48.931146   28038 logs.go:284] 0 containers: []
	W0103 12:52:48.931161   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:52:48.931231   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:52:48.949473   28038 logs.go:284] 0 containers: []
	W0103 12:52:48.949487   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:52:48.949576   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:52:48.969429   28038 logs.go:284] 0 containers: []
	W0103 12:52:48.969443   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:52:48.969450   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:52:48.969470   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:52:49.028302   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:52:49.028319   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:52:49.028330   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:52:49.085355   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:52:49.085370   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:52:49.137844   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:52:49.137859   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:52:49.174468   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:52:49.174492   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:52:51.688552   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:51.700199   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:52:51.717444   28038 logs.go:284] 0 containers: []
	W0103 12:52:51.717459   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:52:51.717527   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:52:51.734968   28038 logs.go:284] 0 containers: []
	W0103 12:52:51.734982   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:52:51.735072   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:52:51.753684   28038 logs.go:284] 0 containers: []
	W0103 12:52:51.753697   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:52:51.753768   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:52:51.771578   28038 logs.go:284] 0 containers: []
	W0103 12:52:51.771593   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:52:51.771674   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:52:51.790667   28038 logs.go:284] 0 containers: []
	W0103 12:52:51.790680   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:52:51.790768   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:52:51.808251   28038 logs.go:284] 0 containers: []
	W0103 12:52:51.808263   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:52:51.808328   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:52:51.826103   28038 logs.go:284] 0 containers: []
	W0103 12:52:51.826126   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:52:51.826212   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:52:51.845055   28038 logs.go:284] 0 containers: []
	W0103 12:52:51.845069   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:52:51.845077   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:52:51.845085   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:52:51.880045   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:52:51.880061   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:52:51.892755   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:52:51.892770   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:52:51.945634   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:52:51.945647   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:52:51.945656   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:52:51.960585   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:52:51.960607   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:52:54.517331   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:54.528943   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:52:54.547977   28038 logs.go:284] 0 containers: []
	W0103 12:52:54.547991   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:52:54.548062   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:52:54.566732   28038 logs.go:284] 0 containers: []
	W0103 12:52:54.566746   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:52:54.566819   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:52:54.585015   28038 logs.go:284] 0 containers: []
	W0103 12:52:54.585029   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:52:54.585095   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:52:54.603841   28038 logs.go:284] 0 containers: []
	W0103 12:52:54.603854   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:52:54.603935   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:52:54.622830   28038 logs.go:284] 0 containers: []
	W0103 12:52:54.622844   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:52:54.622910   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:52:54.642697   28038 logs.go:284] 0 containers: []
	W0103 12:52:54.642713   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:52:54.642781   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:52:54.661659   28038 logs.go:284] 0 containers: []
	W0103 12:52:54.661675   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:52:54.661746   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:52:54.681149   28038 logs.go:284] 0 containers: []
	W0103 12:52:54.681164   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:52:54.681171   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:52:54.681180   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:52:54.730786   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:52:54.730804   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:52:54.766483   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:52:54.766501   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:52:54.779030   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:52:54.779046   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:52:54.834349   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:52:54.834365   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:52:54.834378   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:52:57.349432   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:52:57.359297   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:52:57.378302   28038 logs.go:284] 0 containers: []
	W0103 12:52:57.378317   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:52:57.378387   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:52:57.396919   28038 logs.go:284] 0 containers: []
	W0103 12:52:57.396934   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:52:57.397013   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:52:57.414903   28038 logs.go:284] 0 containers: []
	W0103 12:52:57.414918   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:52:57.415002   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:52:57.434728   28038 logs.go:284] 0 containers: []
	W0103 12:52:57.434742   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:52:57.434809   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:52:57.454321   28038 logs.go:284] 0 containers: []
	W0103 12:52:57.454335   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:52:57.454406   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:52:57.473662   28038 logs.go:284] 0 containers: []
	W0103 12:52:57.473674   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:52:57.473760   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:52:57.492153   28038 logs.go:284] 0 containers: []
	W0103 12:52:57.492178   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:52:57.492251   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:52:57.510485   28038 logs.go:284] 0 containers: []
	W0103 12:52:57.510500   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:52:57.510507   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:52:57.510523   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:52:57.545268   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:52:57.545285   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:52:57.557879   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:52:57.557892   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:52:57.614169   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:52:57.614182   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:52:57.614190   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:52:57.628791   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:52:57.628808   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:00.183286   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:00.193556   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:00.211161   28038 logs.go:284] 0 containers: []
	W0103 12:53:00.211174   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:00.211244   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:00.229268   28038 logs.go:284] 0 containers: []
	W0103 12:53:00.229281   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:00.229973   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:00.249373   28038 logs.go:284] 0 containers: []
	W0103 12:53:00.249400   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:00.249486   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:00.267695   28038 logs.go:284] 0 containers: []
	W0103 12:53:00.267711   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:00.267791   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:00.286544   28038 logs.go:284] 0 containers: []
	W0103 12:53:00.286557   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:00.286625   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:00.305729   28038 logs.go:284] 0 containers: []
	W0103 12:53:00.305742   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:00.305817   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:00.325178   28038 logs.go:284] 0 containers: []
	W0103 12:53:00.325198   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:00.325279   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:00.344387   28038 logs.go:284] 0 containers: []
	W0103 12:53:00.344401   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:00.344408   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:00.344416   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:00.378947   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:00.378962   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:00.391556   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:00.391570   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:00.444848   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:00.444864   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:00.444874   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:00.459412   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:00.459430   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:03.084237   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:03.094757   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:03.114153   28038 logs.go:284] 0 containers: []
	W0103 12:53:03.114172   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:03.114246   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:03.133078   28038 logs.go:284] 0 containers: []
	W0103 12:53:03.133092   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:03.133163   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:03.151688   28038 logs.go:284] 0 containers: []
	W0103 12:53:03.151701   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:03.151774   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:03.169249   28038 logs.go:284] 0 containers: []
	W0103 12:53:03.169263   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:03.169335   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:03.187872   28038 logs.go:284] 0 containers: []
	W0103 12:53:03.187887   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:03.187955   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:03.206892   28038 logs.go:284] 0 containers: []
	W0103 12:53:03.206906   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:03.206976   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:03.225538   28038 logs.go:284] 0 containers: []
	W0103 12:53:03.225555   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:03.225619   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:03.244282   28038 logs.go:284] 0 containers: []
	W0103 12:53:03.244296   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:03.244304   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:03.244311   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:03.258858   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:03.258877   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:03.310723   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:03.310739   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:03.346088   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:03.346104   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:03.359081   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:03.359095   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:03.408199   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:05.908698   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:05.920725   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:05.938538   28038 logs.go:284] 0 containers: []
	W0103 12:53:05.938551   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:05.938627   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:05.957347   28038 logs.go:284] 0 containers: []
	W0103 12:53:05.957361   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:05.957429   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:05.976235   28038 logs.go:284] 0 containers: []
	W0103 12:53:05.976248   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:05.976323   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:05.994070   28038 logs.go:284] 0 containers: []
	W0103 12:53:05.994085   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:05.994166   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:06.014973   28038 logs.go:284] 0 containers: []
	W0103 12:53:06.014988   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:06.015058   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:06.033392   28038 logs.go:284] 0 containers: []
	W0103 12:53:06.033406   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:06.033488   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:06.051639   28038 logs.go:284] 0 containers: []
	W0103 12:53:06.051657   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:06.051749   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:06.071878   28038 logs.go:284] 0 containers: []
	W0103 12:53:06.071893   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:06.071901   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:06.071912   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:06.086554   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:06.086569   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:06.138480   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:06.138505   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:06.173933   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:06.173949   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:06.186736   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:06.186750   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:06.240746   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:08.742179   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:08.754017   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:08.773584   28038 logs.go:284] 0 containers: []
	W0103 12:53:08.773599   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:08.773669   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:08.793211   28038 logs.go:284] 0 containers: []
	W0103 12:53:08.793224   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:08.793300   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:08.814031   28038 logs.go:284] 0 containers: []
	W0103 12:53:08.814043   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:08.814103   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:08.833366   28038 logs.go:284] 0 containers: []
	W0103 12:53:08.833380   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:08.833456   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:08.852614   28038 logs.go:284] 0 containers: []
	W0103 12:53:08.852633   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:08.852707   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:08.871119   28038 logs.go:284] 0 containers: []
	W0103 12:53:08.871134   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:08.871202   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:08.891012   28038 logs.go:284] 0 containers: []
	W0103 12:53:08.891026   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:08.891100   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:08.908995   28038 logs.go:284] 0 containers: []
	W0103 12:53:08.909009   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:08.909016   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:08.909023   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:08.946316   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:08.946331   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:08.959175   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:08.959195   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:09.016061   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:09.016073   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:09.016081   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:09.030579   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:09.030600   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:11.582801   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:11.593203   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:11.612398   28038 logs.go:284] 0 containers: []
	W0103 12:53:11.618319   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:11.618385   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:11.637587   28038 logs.go:284] 0 containers: []
	W0103 12:53:11.637603   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:11.637702   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:11.657091   28038 logs.go:284] 0 containers: []
	W0103 12:53:11.657106   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:11.657171   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:11.675756   28038 logs.go:284] 0 containers: []
	W0103 12:53:11.675773   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:11.675847   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:11.693328   28038 logs.go:284] 0 containers: []
	W0103 12:53:11.693342   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:11.693415   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:11.713310   28038 logs.go:284] 0 containers: []
	W0103 12:53:11.713325   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:11.713396   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:11.731485   28038 logs.go:284] 0 containers: []
	W0103 12:53:11.731503   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:11.731571   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:11.750171   28038 logs.go:284] 0 containers: []
	W0103 12:53:11.750186   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:11.750194   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:11.750201   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:11.800912   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:11.800925   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:11.800934   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:11.815540   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:11.815555   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:11.865334   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:11.865354   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:11.900487   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:11.900503   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:14.413550   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:14.424679   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:14.442602   28038 logs.go:284] 0 containers: []
	W0103 12:53:14.442617   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:14.442703   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:14.462473   28038 logs.go:284] 0 containers: []
	W0103 12:53:14.462487   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:14.462570   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:14.482300   28038 logs.go:284] 0 containers: []
	W0103 12:53:14.482316   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:14.482381   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:14.506312   28038 logs.go:284] 0 containers: []
	W0103 12:53:14.506330   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:14.506398   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:14.529292   28038 logs.go:284] 0 containers: []
	W0103 12:53:14.529307   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:14.529365   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:14.549813   28038 logs.go:284] 0 containers: []
	W0103 12:53:14.549832   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:14.549919   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:14.591214   28038 logs.go:284] 0 containers: []
	W0103 12:53:14.591228   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:14.591302   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:14.609896   28038 logs.go:284] 0 containers: []
	W0103 12:53:14.609912   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:14.609920   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:14.609932   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:14.622716   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:14.622734   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:14.671216   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:14.671238   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:14.671252   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:14.686029   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:14.686044   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:14.737577   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:14.737593   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
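With no kube-apiserver container running, every "describe nodes" attempt above is refused on localhost:8443, the apiserver's secure port inside the minikube node. The first command below is the exact one the harness retries; the ss check is a hypothetical follow-up (my assumption, not something this harness runs) to confirm nothing is listening:

    # exact command the harness retries (from the log)
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # hypothetical check, not run by the harness: is anything on 8443?
    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"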
	I0103 12:53:17.276041   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:17.286828   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:17.305179   28038 logs.go:284] 0 containers: []
	W0103 12:53:17.305193   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:17.305263   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:17.324360   28038 logs.go:284] 0 containers: []
	W0103 12:53:17.324374   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:17.324448   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:17.344180   28038 logs.go:284] 0 containers: []
	W0103 12:53:17.344195   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:17.344269   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:17.362336   28038 logs.go:284] 0 containers: []
	W0103 12:53:17.362350   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:17.362422   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:17.383902   28038 logs.go:284] 0 containers: []
	W0103 12:53:17.383925   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:17.384027   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:17.411999   28038 logs.go:284] 0 containers: []
	W0103 12:53:17.412012   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:17.412086   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:17.430887   28038 logs.go:284] 0 containers: []
	W0103 12:53:17.430902   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:17.430970   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:17.462614   28038 logs.go:284] 0 containers: []
	W0103 12:53:17.462663   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:17.462684   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:17.462697   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:17.505777   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:17.505798   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:17.588733   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:17.588755   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:17.644036   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:17.644051   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:17.644060   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:17.658629   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:17.658645   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:20.220968   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:20.230747   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:20.250090   28038 logs.go:284] 0 containers: []
	W0103 12:53:20.250105   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:20.250172   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:20.268858   28038 logs.go:284] 0 containers: []
	W0103 12:53:20.268879   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:20.268960   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:20.288742   28038 logs.go:284] 0 containers: []
	W0103 12:53:20.288756   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:20.288828   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:20.308300   28038 logs.go:284] 0 containers: []
	W0103 12:53:20.308313   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:20.308382   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:20.326061   28038 logs.go:284] 0 containers: []
	W0103 12:53:20.326075   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:20.326146   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:20.346532   28038 logs.go:284] 0 containers: []
	W0103 12:53:20.346546   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:20.346614   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:20.365952   28038 logs.go:284] 0 containers: []
	W0103 12:53:20.365965   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:20.366034   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:20.383895   28038 logs.go:284] 0 containers: []
	W0103 12:53:20.383910   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:20.383917   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:20.383925   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:20.418635   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:20.418650   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:20.431221   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:20.431236   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:20.488170   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:20.488183   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:20.488195   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:20.502646   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:20.502661   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:23.051428   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:23.061516   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:23.080529   28038 logs.go:284] 0 containers: []
	W0103 12:53:23.080544   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:23.080617   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:23.100217   28038 logs.go:284] 0 containers: []
	W0103 12:53:23.100231   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:23.100301   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:23.118960   28038 logs.go:284] 0 containers: []
	W0103 12:53:23.118980   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:23.119050   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:23.139322   28038 logs.go:284] 0 containers: []
	W0103 12:53:23.139337   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:23.139411   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:23.158550   28038 logs.go:284] 0 containers: []
	W0103 12:53:23.158564   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:23.158638   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:23.177557   28038 logs.go:284] 0 containers: []
	W0103 12:53:23.177575   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:23.177647   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:23.197754   28038 logs.go:284] 0 containers: []
	W0103 12:53:23.197768   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:23.197836   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:23.217041   28038 logs.go:284] 0 containers: []
	W0103 12:53:23.217055   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:23.217062   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:23.217069   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:23.229754   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:23.229770   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:23.296316   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:23.296329   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:23.296338   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:23.310749   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:23.310766   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:23.365967   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:23.365983   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:25.901469   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:25.912497   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:25.932027   28038 logs.go:284] 0 containers: []
	W0103 12:53:25.932042   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:25.932112   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:25.952027   28038 logs.go:284] 0 containers: []
	W0103 12:53:25.952043   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:25.952121   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:25.971565   28038 logs.go:284] 0 containers: []
	W0103 12:53:25.971579   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:25.971654   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:25.990285   28038 logs.go:284] 0 containers: []
	W0103 12:53:25.990298   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:25.990373   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:26.009349   28038 logs.go:284] 0 containers: []
	W0103 12:53:26.009362   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:26.009439   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:26.029460   28038 logs.go:284] 0 containers: []
	W0103 12:53:26.029474   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:26.029548   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:26.047788   28038 logs.go:284] 0 containers: []
	W0103 12:53:26.047802   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:26.047873   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:26.090770   28038 logs.go:284] 0 containers: []
	W0103 12:53:26.090784   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:26.090796   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:26.090808   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:26.103700   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:26.103716   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:26.152659   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:26.152676   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:26.152686   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:26.167490   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:26.167505   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:26.222136   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:26.222152   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:28.759240   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:28.769451   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:28.787400   28038 logs.go:284] 0 containers: []
	W0103 12:53:28.787414   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:28.787485   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:28.806403   28038 logs.go:284] 0 containers: []
	W0103 12:53:28.806418   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:28.806485   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:28.825410   28038 logs.go:284] 0 containers: []
	W0103 12:53:28.825424   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:28.825499   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:28.845710   28038 logs.go:284] 0 containers: []
	W0103 12:53:28.845722   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:28.845787   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:28.864614   28038 logs.go:284] 0 containers: []
	W0103 12:53:28.864627   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:28.864698   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:28.882666   28038 logs.go:284] 0 containers: []
	W0103 12:53:28.882680   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:28.882747   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:28.901565   28038 logs.go:284] 0 containers: []
	W0103 12:53:28.901578   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:28.901656   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:28.920513   28038 logs.go:284] 0 containers: []
	W0103 12:53:28.920524   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:28.920531   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:28.920537   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:28.957414   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:28.957435   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:28.971392   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:28.971417   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:29.026373   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:29.026385   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:29.026393   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:29.040830   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:29.040846   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:31.621970   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:31.633493   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:31.651573   28038 logs.go:284] 0 containers: []
	W0103 12:53:31.651588   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:31.651665   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:31.670249   28038 logs.go:284] 0 containers: []
	W0103 12:53:31.670266   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:31.670336   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:31.688433   28038 logs.go:284] 0 containers: []
	W0103 12:53:31.688446   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:31.688519   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:31.708170   28038 logs.go:284] 0 containers: []
	W0103 12:53:31.708197   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:31.708273   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:31.727035   28038 logs.go:284] 0 containers: []
	W0103 12:53:31.727049   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:31.727129   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:31.746806   28038 logs.go:284] 0 containers: []
	W0103 12:53:31.746819   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:31.746895   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:31.766840   28038 logs.go:284] 0 containers: []
	W0103 12:53:31.766854   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:31.766931   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:31.786074   28038 logs.go:284] 0 containers: []
	W0103 12:53:31.786088   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:31.786095   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:31.786103   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:31.823571   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:31.823586   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:31.836420   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:31.836434   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:31.891446   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:31.891458   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:31.891472   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:31.905793   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:31.905808   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:34.465025   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:34.476565   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:34.495648   28038 logs.go:284] 0 containers: []
	W0103 12:53:34.495662   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:34.495730   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:34.513669   28038 logs.go:284] 0 containers: []
	W0103 12:53:34.513683   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:34.513762   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:34.532860   28038 logs.go:284] 0 containers: []
	W0103 12:53:34.532874   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:34.532948   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:34.551196   28038 logs.go:284] 0 containers: []
	W0103 12:53:34.551219   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:34.551300   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:34.570088   28038 logs.go:284] 0 containers: []
	W0103 12:53:34.570102   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:34.570174   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:34.589020   28038 logs.go:284] 0 containers: []
	W0103 12:53:34.589035   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:34.589104   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:34.609339   28038 logs.go:284] 0 containers: []
	W0103 12:53:34.609353   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:34.609424   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:34.628427   28038 logs.go:284] 0 containers: []
	W0103 12:53:34.628443   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:34.628452   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:34.628461   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:34.683651   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:34.683668   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:34.719759   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:34.719775   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:34.732586   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:34.732600   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:34.795165   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:34.795177   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:34.795189   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:37.310633   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:37.321060   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:37.340174   28038 logs.go:284] 0 containers: []
	W0103 12:53:37.340189   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:37.340280   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:37.359068   28038 logs.go:284] 0 containers: []
	W0103 12:53:37.359081   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:37.359145   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:37.377846   28038 logs.go:284] 0 containers: []
	W0103 12:53:37.377860   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:37.377923   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:37.396727   28038 logs.go:284] 0 containers: []
	W0103 12:53:37.396741   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:37.396817   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:37.416132   28038 logs.go:284] 0 containers: []
	W0103 12:53:37.416146   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:37.416218   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:37.437011   28038 logs.go:284] 0 containers: []
	W0103 12:53:37.437026   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:37.437097   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:37.457971   28038 logs.go:284] 0 containers: []
	W0103 12:53:37.457985   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:37.458054   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:37.476869   28038 logs.go:284] 0 containers: []
	W0103 12:53:37.476889   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:37.476897   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:37.476905   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:37.531622   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:37.531639   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:37.568975   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:37.568995   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:37.584697   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:37.584731   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:37.645364   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:37.645384   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:37.645402   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:40.165607   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:40.177673   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:40.195874   28038 logs.go:284] 0 containers: []
	W0103 12:53:40.195887   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:40.195962   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:40.216154   28038 logs.go:284] 0 containers: []
	W0103 12:53:40.216166   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:40.216234   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:40.234837   28038 logs.go:284] 0 containers: []
	W0103 12:53:40.234850   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:40.234927   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:40.253402   28038 logs.go:284] 0 containers: []
	W0103 12:53:40.253422   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:40.253494   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:40.274121   28038 logs.go:284] 0 containers: []
	W0103 12:53:40.274136   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:40.274201   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:40.293090   28038 logs.go:284] 0 containers: []
	W0103 12:53:40.293104   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:40.293175   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:40.311423   28038 logs.go:284] 0 containers: []
	W0103 12:53:40.311437   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:40.311509   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:40.329884   28038 logs.go:284] 0 containers: []
	W0103 12:53:40.329899   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:40.329916   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:40.329933   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:40.365566   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:40.365582   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:40.378015   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:40.378029   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:40.430204   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:40.430217   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:40.430225   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:40.444679   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:40.444695   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:43.000132   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:43.011535   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:43.029592   28038 logs.go:284] 0 containers: []
	W0103 12:53:43.029607   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:43.029683   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:43.047975   28038 logs.go:284] 0 containers: []
	W0103 12:53:43.047989   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:43.048061   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:43.067541   28038 logs.go:284] 0 containers: []
	W0103 12:53:43.067557   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:43.067622   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:43.086252   28038 logs.go:284] 0 containers: []
	W0103 12:53:43.086267   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:43.086334   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:43.104276   28038 logs.go:284] 0 containers: []
	W0103 12:53:43.104290   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:43.104361   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:43.123860   28038 logs.go:284] 0 containers: []
	W0103 12:53:43.123873   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:43.123946   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:43.144169   28038 logs.go:284] 0 containers: []
	W0103 12:53:43.144183   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:43.144257   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:43.163621   28038 logs.go:284] 0 containers: []
	W0103 12:53:43.163635   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:43.163641   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:43.163648   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:43.178320   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:43.178337   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:43.231983   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:43.231997   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:43.272942   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:43.272965   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:43.287568   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:43.287587   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:43.342351   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
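Each failed pass still collects the same four local log sources (kubelet, dmesg, Docker, container status). Consolidated into one sequence, verbatim from the log:

    # log gathering, exactly as the harness runs it
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a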
	I0103 12:53:45.843406   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:45.854794   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:45.874025   28038 logs.go:284] 0 containers: []
	W0103 12:53:45.874038   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:45.874113   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:45.892220   28038 logs.go:284] 0 containers: []
	W0103 12:53:45.892238   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:45.892306   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:45.910151   28038 logs.go:284] 0 containers: []
	W0103 12:53:45.910164   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:45.910232   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:45.930204   28038 logs.go:284] 0 containers: []
	W0103 12:53:45.930219   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:45.930296   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:45.948642   28038 logs.go:284] 0 containers: []
	W0103 12:53:45.948655   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:45.948720   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:45.967211   28038 logs.go:284] 0 containers: []
	W0103 12:53:45.967225   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:45.967295   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:45.985460   28038 logs.go:284] 0 containers: []
	W0103 12:53:45.985474   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:45.985566   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:46.004238   28038 logs.go:284] 0 containers: []
	W0103 12:53:46.004253   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:46.004262   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:46.004269   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:46.039799   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:46.039815   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:46.052426   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:46.052440   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:46.103862   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:46.103881   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:46.103889   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:46.118211   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:46.118227   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:48.666679   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:48.677484   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:48.699205   28038 logs.go:284] 0 containers: []
	W0103 12:53:48.699219   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:48.699302   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:48.721259   28038 logs.go:284] 0 containers: []
	W0103 12:53:48.721274   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:48.721350   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:48.744153   28038 logs.go:284] 0 containers: []
	W0103 12:53:48.744168   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:48.744255   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:48.768222   28038 logs.go:284] 0 containers: []
	W0103 12:53:48.768238   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:48.768310   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:48.789703   28038 logs.go:284] 0 containers: []
	W0103 12:53:48.789719   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:48.789790   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:48.812330   28038 logs.go:284] 0 containers: []
	W0103 12:53:48.812345   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:48.812431   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:48.834088   28038 logs.go:284] 0 containers: []
	W0103 12:53:48.834104   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:48.834173   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:48.857121   28038 logs.go:284] 0 containers: []
	W0103 12:53:48.857137   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:48.857145   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:48.857152   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:48.919357   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:48.919373   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:48.962005   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:48.962030   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:48.977658   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:48.977680   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:49.043837   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:49.043853   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:49.043863   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:51.560983   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:51.572451   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:51.592091   28038 logs.go:284] 0 containers: []
	W0103 12:53:51.592105   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:51.592178   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:51.612688   28038 logs.go:284] 0 containers: []
	W0103 12:53:51.619217   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:51.619283   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:51.642210   28038 logs.go:284] 0 containers: []
	W0103 12:53:51.642232   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:51.642308   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:51.665592   28038 logs.go:284] 0 containers: []
	W0103 12:53:51.665606   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:51.665705   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:51.687200   28038 logs.go:284] 0 containers: []
	W0103 12:53:51.687215   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:51.687297   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:51.711514   28038 logs.go:284] 0 containers: []
	W0103 12:53:51.711534   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:51.711605   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:51.736401   28038 logs.go:284] 0 containers: []
	W0103 12:53:51.736422   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:51.736527   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:51.761802   28038 logs.go:284] 0 containers: []
	W0103 12:53:51.761824   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:51.761835   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:51.761849   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:51.834788   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:51.834806   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:51.834818   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:51.853960   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:51.853984   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:51.915726   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:51.915754   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:51.955737   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:51.955758   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:54.470492   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:54.480365   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:54.497969   28038 logs.go:284] 0 containers: []
	W0103 12:53:54.497982   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:54.498051   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:54.517138   28038 logs.go:284] 0 containers: []
	W0103 12:53:54.517152   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:54.517230   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:54.535581   28038 logs.go:284] 0 containers: []
	W0103 12:53:54.535594   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:54.535661   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:54.554799   28038 logs.go:284] 0 containers: []
	W0103 12:53:54.554815   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:54.554884   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:54.573853   28038 logs.go:284] 0 containers: []
	W0103 12:53:54.573867   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:54.573934   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:54.594266   28038 logs.go:284] 0 containers: []
	W0103 12:53:54.594282   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:54.594357   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:54.615071   28038 logs.go:284] 0 containers: []
	W0103 12:53:54.615084   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:54.615157   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:54.637855   28038 logs.go:284] 0 containers: []
	W0103 12:53:54.637874   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:54.637883   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:54.637894   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:54.677863   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:54.677879   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:53:54.691149   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:54.691164   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:54.756673   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:54.756688   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:54.756698   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:54.787721   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:54.787747   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:57.349003   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:53:57.361866   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:53:57.385522   28038 logs.go:284] 0 containers: []
	W0103 12:53:57.385538   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:53:57.385623   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:53:57.405899   28038 logs.go:284] 0 containers: []
	W0103 12:53:57.405913   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:53:57.405981   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:53:57.423761   28038 logs.go:284] 0 containers: []
	W0103 12:53:57.423777   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:53:57.423872   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:53:57.441551   28038 logs.go:284] 0 containers: []
	W0103 12:53:57.441564   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:53:57.441632   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:53:57.468184   28038 logs.go:284] 0 containers: []
	W0103 12:53:57.468201   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:53:57.468290   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:53:57.494997   28038 logs.go:284] 0 containers: []
	W0103 12:53:57.495011   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:53:57.495079   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:53:57.514182   28038 logs.go:284] 0 containers: []
	W0103 12:53:57.514196   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:53:57.514265   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:53:57.532744   28038 logs.go:284] 0 containers: []
	W0103 12:53:57.532758   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:53:57.532765   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:53:57.532772   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:53:57.600461   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:53:57.600474   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:53:57.600481   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:53:57.615251   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:53:57.615268   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:53:57.675695   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:53:57.675712   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:53:57.718190   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:53:57.718209   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:00.231950   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:00.243157   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:00.267576   28038 logs.go:284] 0 containers: []
	W0103 12:54:00.267599   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:00.267680   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:00.294354   28038 logs.go:284] 0 containers: []
	W0103 12:54:00.294384   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:00.294490   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:00.319700   28038 logs.go:284] 0 containers: []
	W0103 12:54:00.319714   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:00.319789   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:00.344970   28038 logs.go:284] 0 containers: []
	W0103 12:54:00.344986   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:00.345062   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:00.372744   28038 logs.go:284] 0 containers: []
	W0103 12:54:00.372763   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:00.372891   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:00.397334   28038 logs.go:284] 0 containers: []
	W0103 12:54:00.397348   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:00.397422   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:00.418289   28038 logs.go:284] 0 containers: []
	W0103 12:54:00.418303   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:00.418371   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:00.437490   28038 logs.go:284] 0 containers: []
	W0103 12:54:00.437506   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:00.437513   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:00.437520   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:00.451000   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:00.451018   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:00.506385   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:00.506398   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:00.506414   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:00.521889   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:00.521904   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:00.574049   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:00.574065   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:03.113631   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:03.125127   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:03.144060   28038 logs.go:284] 0 containers: []
	W0103 12:54:03.144073   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:03.144143   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:03.163565   28038 logs.go:284] 0 containers: []
	W0103 12:54:03.163579   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:03.163647   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:03.182857   28038 logs.go:284] 0 containers: []
	W0103 12:54:03.182870   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:03.182949   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:03.201862   28038 logs.go:284] 0 containers: []
	W0103 12:54:03.201875   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:03.201949   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:03.222600   28038 logs.go:284] 0 containers: []
	W0103 12:54:03.222616   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:03.222683   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:03.243724   28038 logs.go:284] 0 containers: []
	W0103 12:54:03.243741   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:03.243812   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:03.276714   28038 logs.go:284] 0 containers: []
	W0103 12:54:03.276730   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:03.276824   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:03.298501   28038 logs.go:284] 0 containers: []
	W0103 12:54:03.298515   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:03.298522   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:03.298529   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:03.335086   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:03.335101   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:03.347773   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:03.347788   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:03.413250   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:03.413263   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:03.413271   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:03.427741   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:03.427754   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:05.975883   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:05.986391   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:06.006212   28038 logs.go:284] 0 containers: []
	W0103 12:54:06.006237   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:06.006318   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:06.026996   28038 logs.go:284] 0 containers: []
	W0103 12:54:06.027019   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:06.027110   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:06.044900   28038 logs.go:284] 0 containers: []
	W0103 12:54:06.044916   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:06.045015   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:06.066176   28038 logs.go:284] 0 containers: []
	W0103 12:54:06.066188   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:06.066252   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:06.086300   28038 logs.go:284] 0 containers: []
	W0103 12:54:06.086314   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:06.086383   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:06.106371   28038 logs.go:284] 0 containers: []
	W0103 12:54:06.106384   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:06.106443   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:06.125928   28038 logs.go:284] 0 containers: []
	W0103 12:54:06.125942   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:06.126017   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:06.144283   28038 logs.go:284] 0 containers: []
	W0103 12:54:06.144298   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:06.144304   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:06.144311   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:06.157134   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:06.157150   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:06.207166   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:06.207180   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:06.207189   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:06.221918   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:06.221932   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:06.272401   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:06.272415   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:08.812173   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:08.824016   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:08.841668   28038 logs.go:284] 0 containers: []
	W0103 12:54:08.841685   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:08.841760   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:08.860857   28038 logs.go:284] 0 containers: []
	W0103 12:54:08.860871   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:08.860943   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:08.879447   28038 logs.go:284] 0 containers: []
	W0103 12:54:08.879462   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:08.879530   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:08.897535   28038 logs.go:284] 0 containers: []
	W0103 12:54:08.897548   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:08.897622   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:08.916710   28038 logs.go:284] 0 containers: []
	W0103 12:54:08.916722   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:08.916784   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:08.935304   28038 logs.go:284] 0 containers: []
	W0103 12:54:08.935319   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:08.935393   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:08.954989   28038 logs.go:284] 0 containers: []
	W0103 12:54:08.955006   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:08.955132   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:08.976064   28038 logs.go:284] 0 containers: []
	W0103 12:54:08.976077   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:08.976088   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:08.976106   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:09.021331   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:09.021354   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:09.080089   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:09.080106   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:09.136372   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:09.136385   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:09.136393   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:09.151041   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:09.151057   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:11.704458   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:11.715287   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:11.734514   28038 logs.go:284] 0 containers: []
	W0103 12:54:11.734532   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:11.734622   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:11.756035   28038 logs.go:284] 0 containers: []
	W0103 12:54:11.756050   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:11.756120   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:11.786933   28038 logs.go:284] 0 containers: []
	W0103 12:54:11.786947   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:11.787018   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:11.807990   28038 logs.go:284] 0 containers: []
	W0103 12:54:11.808004   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:11.808072   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:11.827655   28038 logs.go:284] 0 containers: []
	W0103 12:54:11.827672   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:11.827749   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:11.846928   28038 logs.go:284] 0 containers: []
	W0103 12:54:11.846941   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:11.847009   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:11.865417   28038 logs.go:284] 0 containers: []
	W0103 12:54:11.865434   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:11.865511   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:11.884859   28038 logs.go:284] 0 containers: []
	W0103 12:54:11.884872   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:11.884879   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:11.884888   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:11.923343   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:11.923360   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:11.936383   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:11.936398   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:11.993386   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:11.993400   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:11.993412   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:12.009289   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:12.009306   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:14.567006   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:14.576790   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:14.595818   28038 logs.go:284] 0 containers: []
	W0103 12:54:14.595833   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:14.595906   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:14.615329   28038 logs.go:284] 0 containers: []
	W0103 12:54:14.615342   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:14.615405   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:14.635461   28038 logs.go:284] 0 containers: []
	W0103 12:54:14.635474   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:14.635545   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:14.654330   28038 logs.go:284] 0 containers: []
	W0103 12:54:14.654343   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:14.654409   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:14.673649   28038 logs.go:284] 0 containers: []
	W0103 12:54:14.673663   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:14.673739   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:14.691972   28038 logs.go:284] 0 containers: []
	W0103 12:54:14.691985   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:14.692060   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:14.713151   28038 logs.go:284] 0 containers: []
	W0103 12:54:14.713164   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:14.713243   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:14.733149   28038 logs.go:284] 0 containers: []
	W0103 12:54:14.733165   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:14.733172   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:14.733181   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:14.780364   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:14.780395   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:14.837990   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:14.838004   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:14.838013   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:14.858857   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:14.858884   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:14.927121   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:14.927140   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:17.466507   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:17.477293   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:17.496199   28038 logs.go:284] 0 containers: []
	W0103 12:54:17.496212   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:17.496284   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:17.514014   28038 logs.go:284] 0 containers: []
	W0103 12:54:17.514029   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:17.514097   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:17.532268   28038 logs.go:284] 0 containers: []
	W0103 12:54:17.532282   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:17.532383   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:17.557427   28038 logs.go:284] 0 containers: []
	W0103 12:54:17.557446   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:17.557533   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:17.582347   28038 logs.go:284] 0 containers: []
	W0103 12:54:17.582363   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:17.582432   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:17.600264   28038 logs.go:284] 0 containers: []
	W0103 12:54:17.600278   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:17.600363   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:17.618817   28038 logs.go:284] 0 containers: []
	W0103 12:54:17.618830   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:17.618899   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:17.637026   28038 logs.go:284] 0 containers: []
	W0103 12:54:17.637044   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:17.637052   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:17.637061   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:17.656597   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:17.656630   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:17.717457   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:17.717470   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:17.717478   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:17.732222   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:17.732236   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:17.796462   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:17.796478   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:20.332023   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:20.342402   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:20.363249   28038 logs.go:284] 0 containers: []
	W0103 12:54:20.363263   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:20.363332   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:20.383375   28038 logs.go:284] 0 containers: []
	W0103 12:54:20.383398   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:20.383495   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:20.404566   28038 logs.go:284] 0 containers: []
	W0103 12:54:20.404579   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:20.404640   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:20.426135   28038 logs.go:284] 0 containers: []
	W0103 12:54:20.426151   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:20.426218   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:20.449759   28038 logs.go:284] 0 containers: []
	W0103 12:54:20.449773   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:20.449846   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:20.478743   28038 logs.go:284] 0 containers: []
	W0103 12:54:20.478756   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:20.478825   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:20.499736   28038 logs.go:284] 0 containers: []
	W0103 12:54:20.499748   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:20.499811   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:20.518921   28038 logs.go:284] 0 containers: []
	W0103 12:54:20.518935   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:20.518943   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:20.518950   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:20.559766   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:20.559783   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:20.573249   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:20.573268   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:20.638593   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:20.638612   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:20.638625   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:20.657677   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:20.657696   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:23.233779   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:23.244303   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:23.267257   28038 logs.go:284] 0 containers: []
	W0103 12:54:23.267288   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:23.267363   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:23.291198   28038 logs.go:284] 0 containers: []
	W0103 12:54:23.291225   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:23.291303   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:23.311958   28038 logs.go:284] 0 containers: []
	W0103 12:54:23.311972   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:23.312045   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:23.335590   28038 logs.go:284] 0 containers: []
	W0103 12:54:23.335605   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:23.335677   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:23.357858   28038 logs.go:284] 0 containers: []
	W0103 12:54:23.357878   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:23.357967   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:23.379856   28038 logs.go:284] 0 containers: []
	W0103 12:54:23.379872   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:23.379952   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:23.402732   28038 logs.go:284] 0 containers: []
	W0103 12:54:23.402746   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:23.402826   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:23.423194   28038 logs.go:284] 0 containers: []
	W0103 12:54:23.423210   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:23.423217   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:23.423225   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:23.464999   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:23.465017   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:23.479806   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:23.479822   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:23.536802   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:23.536815   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:23.536823   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:23.553327   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:23.553343   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:26.119649   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:26.130007   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:26.148456   28038 logs.go:284] 0 containers: []
	W0103 12:54:26.148471   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:26.148540   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:26.169168   28038 logs.go:284] 0 containers: []
	W0103 12:54:26.169181   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:26.169259   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:26.188133   28038 logs.go:284] 0 containers: []
	W0103 12:54:26.188146   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:26.188217   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:26.207595   28038 logs.go:284] 0 containers: []
	W0103 12:54:26.207614   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:26.207698   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:26.226923   28038 logs.go:284] 0 containers: []
	W0103 12:54:26.226936   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:26.227009   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:26.248993   28038 logs.go:284] 0 containers: []
	W0103 12:54:26.249008   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:26.249079   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:26.283006   28038 logs.go:284] 0 containers: []
	W0103 12:54:26.283019   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:26.283096   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:26.301897   28038 logs.go:284] 0 containers: []
	W0103 12:54:26.301912   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:26.301920   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:26.301926   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:26.341683   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:26.341701   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:26.354733   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:26.354746   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:26.412994   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:26.413006   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:26.413015   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:26.427524   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:26.427539   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:28.977844   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:28.989387   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:29.011364   28038 logs.go:284] 0 containers: []
	W0103 12:54:29.011381   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:29.011458   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:29.034060   28038 logs.go:284] 0 containers: []
	W0103 12:54:29.034074   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:29.034143   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:29.055855   28038 logs.go:284] 0 containers: []
	W0103 12:54:29.055869   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:29.055939   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:29.079071   28038 logs.go:284] 0 containers: []
	W0103 12:54:29.079087   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:29.079161   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:29.105299   28038 logs.go:284] 0 containers: []
	W0103 12:54:29.105313   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:29.105387   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:29.128648   28038 logs.go:284] 0 containers: []
	W0103 12:54:29.128692   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:29.128788   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:29.151567   28038 logs.go:284] 0 containers: []
	W0103 12:54:29.151586   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:29.151706   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:29.175324   28038 logs.go:284] 0 containers: []
	W0103 12:54:29.175343   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:29.175354   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:29.175366   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:29.253774   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:29.253792   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:29.253806   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:29.283933   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:29.283952   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:29.348631   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:29.348657   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:29.391393   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:29.391412   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:31.906535   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:31.918852   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:31.942195   28038 logs.go:284] 0 containers: []
	W0103 12:54:31.942210   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:31.942280   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:31.965107   28038 logs.go:284] 0 containers: []
	W0103 12:54:31.965123   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:31.965207   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:31.989073   28038 logs.go:284] 0 containers: []
	W0103 12:54:31.989090   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:31.989166   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:32.010203   28038 logs.go:284] 0 containers: []
	W0103 12:54:32.010218   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:32.010297   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:32.033579   28038 logs.go:284] 0 containers: []
	W0103 12:54:32.033595   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:32.033676   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:32.061282   28038 logs.go:284] 0 containers: []
	W0103 12:54:32.061304   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:32.061415   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:32.088979   28038 logs.go:284] 0 containers: []
	W0103 12:54:32.088995   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:32.089075   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:32.108772   28038 logs.go:284] 0 containers: []
	W0103 12:54:32.108785   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:32.108792   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:32.108800   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:32.121506   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:32.121520   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:32.181595   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:32.181620   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:32.181632   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:32.200273   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:32.200290   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:32.264613   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:32.264637   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:34.812399   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:34.825120   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:34.843005   28038 logs.go:284] 0 containers: []
	W0103 12:54:34.843019   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:34.843085   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:34.861423   28038 logs.go:284] 0 containers: []
	W0103 12:54:34.861436   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:34.861497   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:34.881057   28038 logs.go:284] 0 containers: []
	W0103 12:54:34.881071   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:34.881144   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:34.900100   28038 logs.go:284] 0 containers: []
	W0103 12:54:34.900112   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:34.900180   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:34.919244   28038 logs.go:284] 0 containers: []
	W0103 12:54:34.919257   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:34.919326   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:34.938112   28038 logs.go:284] 0 containers: []
	W0103 12:54:34.938126   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:34.938198   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:34.958634   28038 logs.go:284] 0 containers: []
	W0103 12:54:34.958648   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:34.958724   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:34.977670   28038 logs.go:284] 0 containers: []
	W0103 12:54:34.977683   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:34.977690   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:34.977699   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:35.014946   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:35.014963   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:35.027961   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:35.027975   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:35.077511   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:35.077523   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:35.077530   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:35.093674   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:35.093693   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:37.655191   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:37.666366   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:37.684971   28038 logs.go:284] 0 containers: []
	W0103 12:54:37.684985   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:37.685054   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:37.705126   28038 logs.go:284] 0 containers: []
	W0103 12:54:37.705140   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:37.705206   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:37.723785   28038 logs.go:284] 0 containers: []
	W0103 12:54:37.723800   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:37.723881   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:37.741311   28038 logs.go:284] 0 containers: []
	W0103 12:54:37.741324   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:37.741393   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:37.758812   28038 logs.go:284] 0 containers: []
	W0103 12:54:37.758830   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:37.758907   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:37.776898   28038 logs.go:284] 0 containers: []
	W0103 12:54:37.776918   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:37.776990   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:37.796188   28038 logs.go:284] 0 containers: []
	W0103 12:54:37.796202   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:37.796270   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:37.813782   28038 logs.go:284] 0 containers: []
	W0103 12:54:37.813796   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:37.813804   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:37.813813   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:37.862966   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:37.862981   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:37.898099   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:37.898115   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:37.910588   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:37.910602   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:37.962744   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:37.962759   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:37.962767   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
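Each "0 containers: []" line comes from a name-prefix lookup: with the Docker runtime, Kubernetes-managed containers are named k8s_<container>_<pod>_<namespace>_..., so filtering on k8s_<component> is enough to tell whether a control-plane container was ever created. Docker's name filter is a substring match, and "ps -a" includes exited containers, so an empty result really does mean the component never started. A sketch of the same sweep over the eight names checked in every cycle:

    # Sketch: replicate the per-component container lookup from the cycles above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
        [ -n "$ids" ] && echo "k8s_${c}: ${ids}" || echo "no container matching k8s_${c}"
    done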
	I0103 12:54:40.478454   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:40.488978   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:40.511880   28038 logs.go:284] 0 containers: []
	W0103 12:54:40.511896   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:40.511990   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:40.593317   28038 logs.go:284] 0 containers: []
	W0103 12:54:40.593330   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:40.593396   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:40.612063   28038 logs.go:284] 0 containers: []
	W0103 12:54:40.612076   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:40.612147   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:40.631975   28038 logs.go:284] 0 containers: []
	W0103 12:54:40.631989   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:40.632056   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:40.651351   28038 logs.go:284] 0 containers: []
	W0103 12:54:40.651364   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:40.651431   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:40.669020   28038 logs.go:284] 0 containers: []
	W0103 12:54:40.669034   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:40.669097   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:40.687043   28038 logs.go:284] 0 containers: []
	W0103 12:54:40.687055   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:40.687122   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:40.705432   28038 logs.go:284] 0 containers: []
	W0103 12:54:40.705445   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:40.705452   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:40.705459   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:40.740765   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:40.740780   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:40.753087   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:40.753101   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:40.809300   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:40.809311   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:40.809319   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:40.824370   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:40.824386   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:43.380250   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:43.391681   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:43.409910   28038 logs.go:284] 0 containers: []
	W0103 12:54:43.409924   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:43.409994   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:43.427808   28038 logs.go:284] 0 containers: []
	W0103 12:54:43.427822   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:43.427884   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:43.446846   28038 logs.go:284] 0 containers: []
	W0103 12:54:43.446859   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:43.446923   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:43.464455   28038 logs.go:284] 0 containers: []
	W0103 12:54:43.464467   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:43.464535   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:43.482227   28038 logs.go:284] 0 containers: []
	W0103 12:54:43.482242   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:43.482313   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:43.501549   28038 logs.go:284] 0 containers: []
	W0103 12:54:43.501563   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:43.501634   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:43.519897   28038 logs.go:284] 0 containers: []
	W0103 12:54:43.519910   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:43.519972   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:43.539352   28038 logs.go:284] 0 containers: []
	W0103 12:54:43.539365   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:43.539373   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:43.539381   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:43.589001   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:43.589018   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:43.589031   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:43.603521   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:43.603536   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:43.651078   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:43.651093   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:43.686603   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:43.686618   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
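The "kubelet", "Docker", and "dmesg" gathering steps map onto two journald queries and one kernel ring-buffer read. The -n 400 flag caps journalctl at the most recent 400 lines per query; the dmesg flags (util-linux) request human-readable output (-H), no pager (-P), no color (-L=never), and warning-or-worse messages only:

    # Sketch of the per-cycle diagnostics bundle seen above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400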
	I0103 12:54:46.201651   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:46.212872   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:46.234219   28038 logs.go:284] 0 containers: []
	W0103 12:54:46.234236   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:46.234293   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:46.255728   28038 logs.go:284] 0 containers: []
	W0103 12:54:46.255754   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:46.255817   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:46.285317   28038 logs.go:284] 0 containers: []
	W0103 12:54:46.285330   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:46.285403   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:46.304443   28038 logs.go:284] 0 containers: []
	W0103 12:54:46.304456   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:46.304524   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:46.324910   28038 logs.go:284] 0 containers: []
	W0103 12:54:46.324925   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:46.324994   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:46.344525   28038 logs.go:284] 0 containers: []
	W0103 12:54:46.344540   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:46.344618   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:46.363981   28038 logs.go:284] 0 containers: []
	W0103 12:54:46.363996   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:46.364095   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:46.391086   28038 logs.go:284] 0 containers: []
	W0103 12:54:46.391106   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:46.391117   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:46.391138   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:46.448284   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:46.448299   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:46.448312   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:46.474300   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:46.474315   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:46.524104   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:46.524126   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:46.561341   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:46.561368   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
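The "container status" command is a two-level fallback packed into one line: the backtick substitution resolves crictl's full path, or echoes the bare name so the command still parses when crictl is absent, and the trailing "|| sudo docker ps -a" takes over if the crictl invocation fails for any reason. The same construct with modern quoting:

    # Sketch: prefer crictl if installed, otherwise fall back to the Docker CLI.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a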
	I0103 12:54:49.075730   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:49.088246   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:49.107304   28038 logs.go:284] 0 containers: []
	W0103 12:54:49.107318   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:49.107391   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:49.126010   28038 logs.go:284] 0 containers: []
	W0103 12:54:49.126023   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:49.126090   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:49.144819   28038 logs.go:284] 0 containers: []
	W0103 12:54:49.144832   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:49.144899   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:49.163489   28038 logs.go:284] 0 containers: []
	W0103 12:54:49.163503   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:49.163576   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:49.182391   28038 logs.go:284] 0 containers: []
	W0103 12:54:49.182405   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:49.182472   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:49.199773   28038 logs.go:284] 0 containers: []
	W0103 12:54:49.199785   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:49.199850   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:49.219503   28038 logs.go:284] 0 containers: []
	W0103 12:54:49.219518   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:49.219591   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:49.240300   28038 logs.go:284] 0 containers: []
	W0103 12:54:49.240317   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:49.240327   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:49.240337   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:49.254755   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:49.254772   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:49.338351   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:49.338380   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:49.338396   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:49.356563   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:49.356582   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:49.413351   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:49.413365   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:51.994099   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:52.004744   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:52.025619   28038 logs.go:284] 0 containers: []
	W0103 12:54:52.025633   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:52.025701   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:52.044984   28038 logs.go:284] 0 containers: []
	W0103 12:54:52.044997   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:52.045059   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:52.067374   28038 logs.go:284] 0 containers: []
	W0103 12:54:52.067385   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:52.067440   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:52.086077   28038 logs.go:284] 0 containers: []
	W0103 12:54:52.086090   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:52.086160   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:52.105084   28038 logs.go:284] 0 containers: []
	W0103 12:54:52.105097   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:52.105160   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:52.124989   28038 logs.go:284] 0 containers: []
	W0103 12:54:52.125003   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:52.125064   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:52.144725   28038 logs.go:284] 0 containers: []
	W0103 12:54:52.144738   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:52.144806   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:52.165762   28038 logs.go:284] 0 containers: []
	W0103 12:54:52.165776   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:52.165784   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:52.165815   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:52.228056   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:52.228073   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:52.275939   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:52.275960   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:52.295686   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:52.295712   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:52.359043   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:52.359054   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:52.359062   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
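Every describe-nodes attempt fails identically: /var/lib/minikube/kubeconfig points kubectl (the version-matched binary shipped under /var/lib/minikube/binaries/v1.16.0/) at localhost:8443, nothing is listening there, the TCP connect is refused, and kubectl exits with status 1. The command string appearing twice in each warning is minikube's error format, not a double invocation. The refused connection can be confirmed without kubectl by probing the port directly, as a sketch:

    # Sketch: confirm the refused connection independently of kubectl.
    # A curl exit code of 7 means "failed to connect to host".
    curl -ksS https://localhost:8443/healthz || echo "connect failed, rc=$?"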
	I0103 12:54:54.874105   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:54.897600   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:54.918458   28038 logs.go:284] 0 containers: []
	W0103 12:54:54.918475   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:54.918543   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:54.944493   28038 logs.go:284] 0 containers: []
	W0103 12:54:54.944513   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:54.944607   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:54.972347   28038 logs.go:284] 0 containers: []
	W0103 12:54:54.972376   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:54.972466   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:54.998392   28038 logs.go:284] 0 containers: []
	W0103 12:54:54.998409   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:54.998507   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:55.021408   28038 logs.go:284] 0 containers: []
	W0103 12:54:55.021422   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:55.021489   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:55.039611   28038 logs.go:284] 0 containers: []
	W0103 12:54:55.039626   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:55.039693   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:55.057204   28038 logs.go:284] 0 containers: []
	W0103 12:54:55.057217   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:55.057283   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:55.077214   28038 logs.go:284] 0 containers: []
	W0103 12:54:55.077229   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:55.077237   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:55.077247   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:55.114092   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:55.114110   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:54:55.127033   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:55.127048   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:55.185839   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:55.185852   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:55.185860   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:55.200341   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:55.200356   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:57.762605   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:54:57.772858   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:54:57.791263   28038 logs.go:284] 0 containers: []
	W0103 12:54:57.791277   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:54:57.791355   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:54:57.810403   28038 logs.go:284] 0 containers: []
	W0103 12:54:57.810422   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:54:57.810583   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:54:57.829656   28038 logs.go:284] 0 containers: []
	W0103 12:54:57.829670   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:54:57.829747   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:54:57.849535   28038 logs.go:284] 0 containers: []
	W0103 12:54:57.849549   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:54:57.849621   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:54:57.872345   28038 logs.go:284] 0 containers: []
	W0103 12:54:57.872360   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:54:57.872473   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:54:57.891623   28038 logs.go:284] 0 containers: []
	W0103 12:54:57.891636   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:54:57.891704   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:54:57.909942   28038 logs.go:284] 0 containers: []
	W0103 12:54:57.909955   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:54:57.910027   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:54:57.930795   28038 logs.go:284] 0 containers: []
	W0103 12:54:57.930809   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:54:57.930817   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:54:57.930825   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:54:57.989989   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:54:57.990010   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:54:57.990018   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:54:58.005590   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:54:58.005606   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:54:58.059761   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:54:58.059776   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:54:58.100167   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:54:58.100188   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
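The --format={{.ID}} argument is a Go template over docker's container fields, so each match prints as a bare short ID and an empty result is genuinely empty, which is what lets the collector report "0 containers: []". Other fields can be pulled through the same template, for example:

    # Sketch: the same filter with extra template fields for human inspection.
    docker ps -a --filter 'name=k8s_etcd' --format '{{.ID}} {{.Names}} {{.Status}}'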
	I0103 12:55:00.615124   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:00.624489   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:00.644076   28038 logs.go:284] 0 containers: []
	W0103 12:55:00.644090   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:00.644158   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:00.663547   28038 logs.go:284] 0 containers: []
	W0103 12:55:00.663562   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:00.663630   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:00.685144   28038 logs.go:284] 0 containers: []
	W0103 12:55:00.685160   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:00.685238   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:00.708487   28038 logs.go:284] 0 containers: []
	W0103 12:55:00.708503   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:00.708580   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:00.730311   28038 logs.go:284] 0 containers: []
	W0103 12:55:00.730326   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:00.730405   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:00.753783   28038 logs.go:284] 0 containers: []
	W0103 12:55:00.753798   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:00.753875   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:00.775943   28038 logs.go:284] 0 containers: []
	W0103 12:55:00.775959   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:00.776034   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:00.799078   28038 logs.go:284] 0 containers: []
	W0103 12:55:00.799092   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:00.799100   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:00.799109   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:00.815128   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:00.815143   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:00.883930   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:55:00.883943   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:00.883952   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:00.901464   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:00.901480   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:00.970732   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:00.970752   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:03.515706   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:03.525445   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:03.544900   28038 logs.go:284] 0 containers: []
	W0103 12:55:03.544914   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:03.544985   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:03.564782   28038 logs.go:284] 0 containers: []
	W0103 12:55:03.564795   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:03.564864   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:03.583865   28038 logs.go:284] 0 containers: []
	W0103 12:55:03.583878   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:03.583944   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:03.602822   28038 logs.go:284] 0 containers: []
	W0103 12:55:03.602836   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:03.602907   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:03.622729   28038 logs.go:284] 0 containers: []
	W0103 12:55:03.622742   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:03.622811   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:03.643876   28038 logs.go:284] 0 containers: []
	W0103 12:55:03.643890   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:03.643961   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:03.662425   28038 logs.go:284] 0 containers: []
	W0103 12:55:03.662437   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:03.662504   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:03.682987   28038 logs.go:284] 0 containers: []
	W0103 12:55:03.683001   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:03.683008   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:03.683015   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:03.697905   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:03.697920   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:03.752959   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:03.752973   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:03.792119   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:03.792137   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:03.806774   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:03.806790   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:03.863134   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
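By this point the probe has failed at a steady ~3 s cadence for the whole excerpt with no control-plane container ever appearing, which is the in-log signature of a start that will end in a wait timeout. A bounded version of the same poll, as a sketch (the 600 s budget is an illustrative assumption, not a value taken from this run):

    # Sketch: the wait loop with an explicit deadline; the 600s budget is assumed.
    deadline=$((SECONDS + 600))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo 'timed out waiting for kube-apiserver' >&2
            exit 1
        fi
        sleep 2.5
    done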
	I0103 12:55:06.364279   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:06.375229   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:06.392541   28038 logs.go:284] 0 containers: []
	W0103 12:55:06.392560   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:06.392666   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:06.413769   28038 logs.go:284] 0 containers: []
	W0103 12:55:06.413789   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:06.413863   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:06.435330   28038 logs.go:284] 0 containers: []
	W0103 12:55:06.435350   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:06.435426   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:06.459097   28038 logs.go:284] 0 containers: []
	W0103 12:55:06.459111   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:06.459189   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:06.479778   28038 logs.go:284] 0 containers: []
	W0103 12:55:06.479792   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:06.479862   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:06.502421   28038 logs.go:284] 0 containers: []
	W0103 12:55:06.502464   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:06.502558   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:06.530670   28038 logs.go:284] 0 containers: []
	W0103 12:55:06.530706   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:06.530859   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:06.555380   28038 logs.go:284] 0 containers: []
	W0103 12:55:06.555394   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:06.555401   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:06.555408   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:06.622172   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:06.637039   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:06.656922   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:06.656939   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:06.724113   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:55:06.724129   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:06.724137   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:06.743386   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:06.743404   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:09.299157   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:09.310757   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:09.330462   28038 logs.go:284] 0 containers: []
	W0103 12:55:09.330475   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:09.330546   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:09.349311   28038 logs.go:284] 0 containers: []
	W0103 12:55:09.349324   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:09.349391   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:09.369228   28038 logs.go:284] 0 containers: []
	W0103 12:55:09.369242   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:09.369315   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:09.390337   28038 logs.go:284] 0 containers: []
	W0103 12:55:09.390349   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:09.390416   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:09.408279   28038 logs.go:284] 0 containers: []
	W0103 12:55:09.408292   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:09.408360   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:09.427599   28038 logs.go:284] 0 containers: []
	W0103 12:55:09.427612   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:09.427701   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:09.445355   28038 logs.go:284] 0 containers: []
	W0103 12:55:09.445370   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:09.445442   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:09.465065   28038 logs.go:284] 0 containers: []
	W0103 12:55:09.465080   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:09.465089   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:09.465100   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:09.522319   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:55:09.522334   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:09.522345   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:09.540339   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:09.540355   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:09.609541   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:09.609557   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:09.647471   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:09.647488   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:12.160775   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:12.171315   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:12.190330   28038 logs.go:284] 0 containers: []
	W0103 12:55:12.190344   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:12.190415   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:12.211366   28038 logs.go:284] 0 containers: []
	W0103 12:55:12.211380   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:12.211450   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:12.231129   28038 logs.go:284] 0 containers: []
	W0103 12:55:12.231141   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:12.231208   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:12.250649   28038 logs.go:284] 0 containers: []
	W0103 12:55:12.250662   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:12.250750   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:12.268068   28038 logs.go:284] 0 containers: []
	W0103 12:55:12.268081   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:12.268151   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:12.286308   28038 logs.go:284] 0 containers: []
	W0103 12:55:12.286322   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:12.286401   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:12.305597   28038 logs.go:284] 0 containers: []
	W0103 12:55:12.305612   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:12.305686   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:12.323337   28038 logs.go:284] 0 containers: []
	W0103 12:55:12.323351   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:12.323359   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:12.323366   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:12.336027   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:12.336041   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:12.388098   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:55:12.388112   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:12.388120   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:12.402658   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:12.402673   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:12.451188   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:12.451205   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:14.995189   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:15.006550   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:15.024067   28038 logs.go:284] 0 containers: []
	W0103 12:55:15.024080   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:15.024147   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:15.042548   28038 logs.go:284] 0 containers: []
	W0103 12:55:15.042562   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:15.042631   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:15.062049   28038 logs.go:284] 0 containers: []
	W0103 12:55:15.062062   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:15.062131   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:15.081105   28038 logs.go:284] 0 containers: []
	W0103 12:55:15.081119   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:15.081187   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:15.099275   28038 logs.go:284] 0 containers: []
	W0103 12:55:15.099288   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:15.099363   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:15.116850   28038 logs.go:284] 0 containers: []
	W0103 12:55:15.116868   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:15.116942   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:15.135086   28038 logs.go:284] 0 containers: []
	W0103 12:55:15.135100   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:15.135166   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:15.154778   28038 logs.go:284] 0 containers: []
	W0103 12:55:15.154800   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:15.154813   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:15.154826   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:15.189826   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:15.189841   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:15.202628   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:15.202646   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:15.270978   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:55:15.270989   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:15.270997   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:15.285563   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:15.285579   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:17.837645   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:17.848355   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:17.868853   28038 logs.go:284] 0 containers: []
	W0103 12:55:17.868876   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:17.868965   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:17.887416   28038 logs.go:284] 0 containers: []
	W0103 12:55:17.887429   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:17.887501   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:17.906518   28038 logs.go:284] 0 containers: []
	W0103 12:55:17.906532   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:17.906616   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:17.924964   28038 logs.go:284] 0 containers: []
	W0103 12:55:17.924978   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:17.925048   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:17.944327   28038 logs.go:284] 0 containers: []
	W0103 12:55:17.944340   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:17.944408   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:17.963134   28038 logs.go:284] 0 containers: []
	W0103 12:55:17.963146   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:17.963217   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:17.981579   28038 logs.go:284] 0 containers: []
	W0103 12:55:17.981592   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:17.981663   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:17.999630   28038 logs.go:284] 0 containers: []
	W0103 12:55:17.999643   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:17.999650   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:17.999657   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:18.038113   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:18.038130   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:18.050744   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:18.050760   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:18.099867   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:55:18.099886   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:18.099898   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:18.115110   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:18.115125   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:20.668579   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:20.679986   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:20.697266   28038 logs.go:284] 0 containers: []
	W0103 12:55:20.697281   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:20.697358   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:20.717952   28038 logs.go:284] 0 containers: []
	W0103 12:55:20.717974   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:20.718062   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:20.738036   28038 logs.go:284] 0 containers: []
	W0103 12:55:20.738057   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:20.738153   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:20.756549   28038 logs.go:284] 0 containers: []
	W0103 12:55:20.756571   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:20.756682   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:20.783923   28038 logs.go:284] 0 containers: []
	W0103 12:55:20.783942   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:20.784036   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:20.802764   28038 logs.go:284] 0 containers: []
	W0103 12:55:20.802781   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:20.802859   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:20.822337   28038 logs.go:284] 0 containers: []
	W0103 12:55:20.822350   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:20.822417   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:20.841812   28038 logs.go:284] 0 containers: []
	W0103 12:55:20.841825   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:20.841832   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:20.841839   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:20.878251   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:20.878268   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:20.891456   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:20.891470   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:20.960450   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:55:20.960462   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:20.960473   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:20.975118   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:20.975133   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:23.531295   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:23.541099   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:23.560879   28038 logs.go:284] 0 containers: []
	W0103 12:55:23.560893   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:23.560959   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:23.579741   28038 logs.go:284] 0 containers: []
	W0103 12:55:23.579754   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:23.579817   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:23.601277   28038 logs.go:284] 0 containers: []
	W0103 12:55:23.601296   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:23.601383   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:23.627088   28038 logs.go:284] 0 containers: []
	W0103 12:55:23.627109   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:23.627246   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:23.648819   28038 logs.go:284] 0 containers: []
	W0103 12:55:23.648833   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:23.648901   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:23.666466   28038 logs.go:284] 0 containers: []
	W0103 12:55:23.666480   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:23.666548   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:23.685044   28038 logs.go:284] 0 containers: []
	W0103 12:55:23.685068   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:23.685146   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:23.706824   28038 logs.go:284] 0 containers: []
	W0103 12:55:23.706848   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:23.706860   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:23.706875   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:23.761466   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:23.761489   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:23.776480   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:23.776495   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:23.850235   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:55:23.850252   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:23.850260   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:23.864789   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:23.864803   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:26.428689   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:26.438537   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:26.456592   28038 logs.go:284] 0 containers: []
	W0103 12:55:26.456606   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:26.456674   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:26.475972   28038 logs.go:284] 0 containers: []
	W0103 12:55:26.475985   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:26.476054   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:26.494516   28038 logs.go:284] 0 containers: []
	W0103 12:55:26.494528   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:26.494595   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:26.513377   28038 logs.go:284] 0 containers: []
	W0103 12:55:26.513390   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:26.513467   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:26.531352   28038 logs.go:284] 0 containers: []
	W0103 12:55:26.531379   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:26.531450   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:26.550854   28038 logs.go:284] 0 containers: []
	W0103 12:55:26.550867   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:26.550933   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:26.570505   28038 logs.go:284] 0 containers: []
	W0103 12:55:26.570517   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:26.570580   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:26.589056   28038 logs.go:284] 0 containers: []
	W0103 12:55:26.589071   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:26.589082   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:26.589097   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:26.641624   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:55:26.641637   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:26.641645   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:26.656332   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:26.656347   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:26.709320   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:26.709337   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:26.747838   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:26.747861   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:29.262400   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:29.273871   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:29.292525   28038 logs.go:284] 0 containers: []
	W0103 12:55:29.292538   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:29.292606   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:29.314315   28038 logs.go:284] 0 containers: []
	W0103 12:55:29.314328   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:29.314399   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:29.332142   28038 logs.go:284] 0 containers: []
	W0103 12:55:29.332159   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:29.332249   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:29.351716   28038 logs.go:284] 0 containers: []
	W0103 12:55:29.351731   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:29.351800   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:29.370534   28038 logs.go:284] 0 containers: []
	W0103 12:55:29.370548   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:29.370624   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:29.388785   28038 logs.go:284] 0 containers: []
	W0103 12:55:29.388798   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:29.388878   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:29.408410   28038 logs.go:284] 0 containers: []
	W0103 12:55:29.408424   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:29.408507   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:29.424998   28038 logs.go:284] 0 containers: []
	W0103 12:55:29.425013   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:29.425021   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:29.425033   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:29.460800   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:29.460816   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:29.473764   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:29.473778   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:29.524416   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:55:29.524429   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:29.524437   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:29.538900   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:29.538913   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:32.093211   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:32.105046   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:55:32.122294   28038 logs.go:284] 0 containers: []
	W0103 12:55:32.122310   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:55:32.122379   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:55:32.140743   28038 logs.go:284] 0 containers: []
	W0103 12:55:32.140757   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:55:32.140825   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:55:32.159744   28038 logs.go:284] 0 containers: []
	W0103 12:55:32.159758   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:55:32.159828   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:55:32.178562   28038 logs.go:284] 0 containers: []
	W0103 12:55:32.178576   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:55:32.178644   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:55:32.197707   28038 logs.go:284] 0 containers: []
	W0103 12:55:32.197722   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:55:32.197790   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:55:32.215757   28038 logs.go:284] 0 containers: []
	W0103 12:55:32.215770   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:55:32.215841   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:55:32.234775   28038 logs.go:284] 0 containers: []
	W0103 12:55:32.234788   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:55:32.234871   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:55:32.253224   28038 logs.go:284] 0 containers: []
	W0103 12:55:32.253244   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:55:32.253255   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:55:32.253263   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:55:32.267823   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:55:32.267838   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 12:55:32.316052   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:55:32.316067   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:55:32.353845   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:55:32.353865   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:55:32.367097   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:55:32.367121   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:55:32.429303   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:55:34.929683   28038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:55:34.940912   28038 kubeadm.go:640] restartCluster took 4m12.169766012s
	W0103 12:55:34.940959   28038 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
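restartCluster has now polled for 4m12s without ever seeing an apiserver process, so minikube abandons the restart path: it resets the node with kubeadm and falls through to a fresh kubeadm init (following lines). When the apiserver never appears, the first thing to inspect is usually the kubelet, since it is what launches the static control-plane pods; a sketch, assuming a systemd-managed kubelet as on this node:

	# the kubelet must be active for static pods to start
	systemctl status kubelet
	# recent kubelet logs usually name the root cause (cgroup driver, config, certs)
	sudo journalctl -u kubelet -n 400 --no-pager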
	I0103 12:55:34.940974   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0103 12:55:35.378198   28038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 12:55:35.389133   28038 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 12:55:35.397693   28038 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 12:55:35.397749   28038 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:55:35.406225   28038 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
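The four missing /etc/kubernetes/*.conf files are expected here: the kubeadm reset just above removed them, so the stale-config check exits with status 2 and minikube skips cleanup and goes straight to kubeadm init. The init command that follows downgrades a long list of preflight checks to warnings via --ignore-preflight-errors (Swap, NumCPU, Port-10250, SystemVerification, and several DirAvailable/FileAvailable checks), since those routinely fail inside a Docker-driver node. The config check itself is equivalent to:

	# after `kubeadm reset --force`, none of these should exist
	ls -la /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf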
	I0103 12:55:35.406254   28038 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 12:55:35.453827   28038 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0103 12:55:35.453870   28038 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 12:55:35.795524   28038 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 12:55:35.795606   28038 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 12:55:35.795684   28038 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 12:55:35.965733   28038 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 12:55:35.966607   28038 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 12:55:35.972849   28038 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0103 12:55:36.034103   28038 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 12:55:36.055606   28038 out.go:204]   - Generating certificates and keys ...
	I0103 12:55:36.055670   28038 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 12:55:36.055742   28038 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 12:55:36.055825   28038 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0103 12:55:36.055880   28038 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0103 12:55:36.055930   28038 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0103 12:55:36.055987   28038 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0103 12:55:36.056041   28038 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0103 12:55:36.056088   28038 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0103 12:55:36.056154   28038 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0103 12:55:36.056205   28038 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0103 12:55:36.056232   28038 kubeadm.go:322] [certs] Using the existing "sa" key
	I0103 12:55:36.056279   28038 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 12:55:36.211781   28038 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 12:55:36.430449   28038 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 12:55:36.519726   28038 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 12:55:36.866725   28038 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 12:55:36.867202   28038 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 12:55:36.890321   28038 out.go:204]   - Booting up control plane ...
	I0103 12:55:36.890431   28038 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 12:55:36.890537   28038 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 12:55:36.890596   28038 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 12:55:36.890678   28038 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 12:55:36.890841   28038 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 12:56:16.877223   28038 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0103 12:56:16.877699   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:56:16.877847   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:56:21.878574   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:56:21.878724   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:56:31.879869   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:56:31.880027   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:56:51.881181   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:56:51.881348   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:57:31.883715   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:57:31.883869   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:57:31.883880   28038 kubeadm.go:322] 
	I0103 12:57:31.883907   28038 kubeadm.go:322] Unfortunately, an error has occurred:
	I0103 12:57:31.883932   28038 kubeadm.go:322] 	timed out waiting for the condition
	I0103 12:57:31.883938   28038 kubeadm.go:322] 
	I0103 12:57:31.883963   28038 kubeadm.go:322] This error is likely caused by:
	I0103 12:57:31.883992   28038 kubeadm.go:322] 	- The kubelet is not running
	I0103 12:57:31.884072   28038 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0103 12:57:31.884079   28038 kubeadm.go:322] 
	I0103 12:57:31.884157   28038 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0103 12:57:31.884180   28038 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0103 12:57:31.884206   28038 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0103 12:57:31.884216   28038 kubeadm.go:322] 
	I0103 12:57:31.884332   28038 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0103 12:57:31.884435   28038 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0103 12:57:31.884504   28038 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0103 12:57:31.884546   28038 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0103 12:57:31.884604   28038 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0103 12:57:31.884634   28038 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0103 12:57:31.886150   28038 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0103 12:57:31.886218   28038 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0103 12:57:31.886332   28038 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0103 12:57:31.886430   28038 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 12:57:31.886509   28038 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0103 12:57:31.886565   28038 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
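The [kubelet-check] lines above show kubeadm's liveness probe: after an initial 40s grace period it polls the kubelet healthz endpoint on 127.0.0.1:10248 with a doubling back-off (the timestamps are roughly +5s, +10s, +20s, +40s apart) until the wait-control-plane deadline. "connection refused" on every attempt means the kubelet never bound the port at all, i.e. it is failing to start rather than starting slowly. The probe kubeadm describes can be run by hand on the node:

	# a healthy kubelet answers "ok"; connection refused means it is not running
	curl -sSL http://localhost:10248/healthz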
	W0103 12:57:31.886636   28038 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0103 12:57:31.886666   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0103 12:57:32.302539   28038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 12:57:32.313212   28038 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 12:57:32.313276   28038 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:57:32.321754   28038 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
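The node has been reset and the identical kubeadm init is about to run a second time (next line); since nothing about the node changed, it fails the same way. The four preflight warnings in the failure above are the strongest root-cause hints in this log: Docker is using the "cgroupfs" cgroup driver where "systemd" is recommended, swap is on, the kubelet service is not enabled, and Docker 24.0.7 is far newer than the latest version validated for kubeadm v1.16.0 (18.09). A v1.16-era kubelet refusing to start on a modern Docker and cgroup setup fits the "required cgroups disabled" hypothesis kubeadm prints. One quick check, assuming the Docker CLI on the node:

	# kubeadm v1.16 expects this to match the kubelet's cgroup driver
	docker info --format '{{.CgroupDriver}}'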
	I0103 12:57:32.321773   28038 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 12:57:32.370029   28038 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0103 12:57:32.370139   28038 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 12:57:32.626686   28038 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 12:57:32.626778   28038 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 12:57:32.626864   28038 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 12:57:32.799190   28038 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 12:57:32.800098   28038 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 12:57:32.806447   28038 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0103 12:57:32.877066   28038 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 12:57:32.898385   28038 out.go:204]   - Generating certificates and keys ...
	I0103 12:57:32.898471   28038 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 12:57:32.898524   28038 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 12:57:32.898590   28038 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0103 12:57:32.898673   28038 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0103 12:57:32.898753   28038 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0103 12:57:32.898824   28038 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0103 12:57:32.898940   28038 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0103 12:57:32.899028   28038 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0103 12:57:32.899104   28038 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0103 12:57:32.899190   28038 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0103 12:57:32.899264   28038 kubeadm.go:322] [certs] Using the existing "sa" key
	I0103 12:57:32.899335   28038 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 12:57:32.941717   28038 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 12:57:33.108106   28038 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 12:57:33.190846   28038 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 12:57:33.297984   28038 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 12:57:33.299073   28038 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 12:57:33.320504   28038 out.go:204]   - Booting up control plane ...
	I0103 12:57:33.320605   28038 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 12:57:33.320692   28038 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 12:57:33.320768   28038 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 12:57:33.320882   28038 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 12:57:33.321074   28038 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 12:58:13.308842   28038 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0103 12:58:13.309543   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:58:13.309762   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:58:18.311492   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:58:18.311762   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:58:28.313440   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:58:28.313646   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:58:48.315168   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:58:48.315398   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:59:28.318112   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:59:28.318393   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:59:28.318413   28038 kubeadm.go:322] 
	I0103 12:59:28.318517   28038 kubeadm.go:322] Unfortunately, an error has occurred:
	I0103 12:59:28.318565   28038 kubeadm.go:322] 	timed out waiting for the condition
	I0103 12:59:28.318571   28038 kubeadm.go:322] 
	I0103 12:59:28.318614   28038 kubeadm.go:322] This error is likely caused by:
	I0103 12:59:28.318661   28038 kubeadm.go:322] 	- The kubelet is not running
	I0103 12:59:28.318786   28038 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0103 12:59:28.318795   28038 kubeadm.go:322] 
	I0103 12:59:28.318919   28038 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0103 12:59:28.318958   28038 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0103 12:59:28.318991   28038 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0103 12:59:28.318997   28038 kubeadm.go:322] 
	I0103 12:59:28.319111   28038 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0103 12:59:28.319239   28038 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0103 12:59:28.319400   28038 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0103 12:59:28.319483   28038 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0103 12:59:28.319560   28038 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0103 12:59:28.319600   28038 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0103 12:59:28.321025   28038 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0103 12:59:28.321106   28038 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0103 12:59:28.321235   28038 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0103 12:59:28.321393   28038 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 12:59:28.321471   28038 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0103 12:59:28.321530   28038 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0103 12:59:28.321593   28038 kubeadm.go:406] StartCluster complete in 8m5.573207777s
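StartCluster spent 8m5s in total: the 4m12s restart-poll phase, then two kubeadm init attempts of roughly two minutes each. What follows is the final diagnostic sweep (the same container listing and log gathering as in the wait loop) before minikube surfaces the error. For a failure like this, the most useful artifacts to pull from the node before it is torn down are the kubelet journal and the full container list; a sketch via minikube ssh, with the profile name a placeholder:

	# <profile> is whichever cluster profile this test created
	minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400 --no-pager
	minikube -p <profile> ssh -- sudo docker ps -a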
	I0103 12:59:28.321672   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:59:28.340506   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.340521   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:59:28.340589   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:59:28.358910   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.358925   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:59:28.358995   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:59:28.378961   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.378975   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:59:28.379047   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:59:28.398954   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.398969   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:59:28.399039   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:59:28.418263   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.418277   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:59:28.418356   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:59:28.437709   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.437723   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:59:28.437793   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:59:28.456942   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.456957   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:59:28.457027   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:59:28.473952   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.473967   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:59:28.473974   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:59:28.473982   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:59:28.509227   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:59:28.509242   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:59:28.521928   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:59:28.521944   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:59:28.576892   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:59:28.576904   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:59:28.576912   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:59:28.591634   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:59:28.591650   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
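The Error starting cluster warning that follows is minikube's final verdict for this start attempt, with the same kubeadm init output attached verbatim once more. For a local reproduction attempt outside CI, something along these lines should exercise the same path (profile name hypothetical):

	minikube start -p legacy-repro --kubernetes-version=v1.16.0 --driver=docker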
	W0103 12:59:28.645197   28038 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
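	The stderr warnings above point at the most likely root cause: the Docker daemon reports the cgroupfs cgroup driver while kubeadm recommends systemd, and on this old Kubernetes version (v1.16.0) a driver mismatch can keep the kubelet from starting at all. Two follow-ups drawn from the log itself, sketched here with the profile name used in this test:

		# Confirm which cgroup driver the Docker daemon is using (expected: cgroupfs, per the warning)
		docker info --format '{{.CgroupDriver}}'
		# Retry the start with the kubelet pinned to the systemd driver, as minikube suggests further below
		out/minikube-darwin-amd64 start -p old-k8s-version-079000 --extra-config=kubelet.cgroup-driver=systemd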
	W0103 12:59:28.645223   28038 out.go:239] * 
	W0103 12:59:28.645272   28038 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... stdout/stderr identical to the kubeadm init output above ...]
	W0103 12:59:28.645290   28038 out.go:239] * 
	W0103 12:59:28.645945   28038 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 12:59:28.707891   28038 out.go:177] 
	W0103 12:59:28.765796   28038 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... stdout/stderr identical to the kubeadm init output above ...]
	W0103 12:59:28.765856   28038 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0103 12:59:28.765879   28038 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0103 12:59:28.786908   28038 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-079000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
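Because the driver is docker, the minikube node is itself a container running systemd as PID 1, so kubeadm's own troubleshooting advice can be followed from the macOS host with docker exec. A minimal sketch, assuming the old-k8s-version-079000 container is still up (it is, per the docker inspect below) and that curl exists in the kicbase image:

	# kubeadm's suggested checks, run against the node container
	docker exec old-k8s-version-079000 systemctl status kubelet --no-pager
	docker exec old-k8s-version-079000 journalctl -xeu kubelet --no-pager
	# The healthz probe that kubeadm kept retrying
	docker exec old-k8s-version-079000 curl -sS http://localhost:10248/healthz
	# List control-plane containers inside the node (docker-in-docker), per kubeadm's hint
	docker exec old-k8s-version-079000 docker ps -a | grep kube | grep -v pause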
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-079000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-079000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3",
	        "Created": "2024-01-03T20:44:55.833825695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 330830,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:51:07.721836081Z",
	            "FinishedAt": "2024-01-03T20:51:04.945962022Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hosts",
	        "LogPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3-json.log",
	        "Name": "/old-k8s-version-079000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-079000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-079000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a-init/diff:/var/lib/docker/overlay2/d51c25870073ca49ae45bcaffff5d04b6853b273710b15cd26d3414e5d7cfab6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-079000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-079000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-079000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "22690e506998a488020031708015bc4c616d9aded4ec18ee021cebb06f55f6c8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61670"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61671"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61672"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61668"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61669"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/22690e506998",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-079000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "488c5550224f",
	                        "old-k8s-version-079000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "fa57a59237dbd216e3611a46ef90c42978dc8b8c11f6ffc7c61970c426e7ce95",
	                    "EndpointID": "b9f1eeb15eb3bcf34443d22df5a9f0f604e4242a88fe4cc278cbd366a5c2f69a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
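The full docker inspect dump above is useful for archiving, but single fields are easier to scan with a Go template. For example, the state and restart data recorded above can be extracted directly:

	docker inspect -f 'status={{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}} restarts={{.RestartCount}}' old-k8s-version-079000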
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 2 (378.553451ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-079000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-079000 logs -n 25: (1.405085246s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p false-236000 sudo                                   | false-236000           | jenkins | v1.32.0 | 03 Jan 24 12:45 PST | 03 Jan 24 12:45 PST |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p false-236000 sudo systemctl                         | false-236000           | jenkins | v1.32.0 | 03 Jan 24 12:45 PST |                     |
	|         | status crio --all --full                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-236000 sudo systemctl                         | false-236000           | jenkins | v1.32.0 | 03 Jan 24 12:45 PST | 03 Jan 24 12:45 PST |
	|         | cat crio --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p false-236000 sudo find                              | false-236000           | jenkins | v1.32.0 | 03 Jan 24 12:45 PST | 03 Jan 24 12:45 PST |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p false-236000 sudo crio                              | false-236000           | jenkins | v1.32.0 | 03 Jan 24 12:45 PST | 03 Jan 24 12:45 PST |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p false-236000                                        | false-236000           | jenkins | v1.32.0 | 03 Jan 24 12:45 PST | 03 Jan 24 12:45 PST |
	| start   | -p no-preload-742000                                   | no-preload-742000      | jenkins | v1.32.0 | 03 Jan 24 12:45 PST | 03 Jan 24 12:48 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-742000             | no-preload-742000      | jenkins | v1.32.0 | 03 Jan 24 12:48 PST | 03 Jan 24 12:48 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-742000                                   | no-preload-742000      | jenkins | v1.32.0 | 03 Jan 24 12:48 PST | 03 Jan 24 12:48 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-742000                  | no-preload-742000      | jenkins | v1.32.0 | 03 Jan 24 12:48 PST | 03 Jan 24 12:48 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-742000                                   | no-preload-742000      | jenkins | v1.32.0 | 03 Jan 24 12:48 PST | 03 Jan 24 12:54 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079000        | old-k8s-version-079000 | jenkins | v1.32.0 | 03 Jan 24 12:49 PST |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-079000                              | old-k8s-version-079000 | jenkins | v1.32.0 | 03 Jan 24 12:51 PST | 03 Jan 24 12:51 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079000             | old-k8s-version-079000 | jenkins | v1.32.0 | 03 Jan 24 12:51 PST | 03 Jan 24 12:51 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-079000                              | old-k8s-version-079000 | jenkins | v1.32.0 | 03 Jan 24 12:51 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| image   | no-preload-742000 image list                           | no-preload-742000      | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:54 PST |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p no-preload-742000                                   | no-preload-742000      | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:54 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-742000                                   | no-preload-742000      | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:54 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-742000                                   | no-preload-742000      | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:54 PST |
	| delete  | -p no-preload-742000                                   | no-preload-742000      | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:54 PST |
	| start   | -p embed-certs-362000                                  | embed-certs-362000     | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:55 PST |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-362000            | embed-certs-362000     | jenkins | v1.32.0 | 03 Jan 24 12:55 PST | 03 Jan 24 12:55 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-362000                                  | embed-certs-362000     | jenkins | v1.32.0 | 03 Jan 24 12:55 PST | 03 Jan 24 12:55 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-362000                 | embed-certs-362000     | jenkins | v1.32.0 | 03 Jan 24 12:55 PST | 03 Jan 24 12:55 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-362000                                  | embed-certs-362000     | jenkins | v1.32.0 | 03 Jan 24 12:55 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 12:55:48
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 12:55:48.853350   28512 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:55:48.853639   28512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:55:48.853645   28512 out.go:309] Setting ErrFile to fd 2...
	I0103 12:55:48.853649   28512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:55:48.853831   28512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:55:48.855278   28512 out.go:303] Setting JSON to false
	I0103 12:55:48.878407   28512 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8718,"bootTime":1704306630,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 12:55:48.878491   28512 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 12:55:48.900585   28512 out.go:177] * [embed-certs-362000] minikube v1.32.0 on Darwin 14.2
	I0103 12:55:48.942227   28512 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 12:55:48.942317   28512 notify.go:220] Checking for updates...
	I0103 12:55:48.985347   28512 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 12:55:49.006225   28512 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 12:55:49.048344   28512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 12:55:49.090148   28512 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	I0103 12:55:49.111486   28512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 12:55:49.132998   28512 config.go:182] Loaded profile config "embed-certs-362000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0103 12:55:49.133758   28512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 12:55:49.190055   28512 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 12:55:49.190221   28512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:55:49.314462   28512 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 20:55:49.287976757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:55:49.335991   28512 out.go:177] * Using the docker driver based on existing profile
	I0103 12:55:49.356731   28512 start.go:298] selected driver: docker
	I0103 12:55:49.356756   28512 start.go:902] validating driver "docker" against &{Name:embed-certs-362000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-362000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:55:49.356859   28512 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 12:55:49.361162   28512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:55:49.461784   28512 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 20:55:49.451445762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:55:49.462028   28512 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 12:55:49.462086   28512 cni.go:84] Creating CNI manager for ""
	I0103 12:55:49.462100   28512 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 12:55:49.462111   28512 start_flags.go:323] config:
	{Name:embed-certs-362000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-362000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:55:49.504874   28512 out.go:177] * Starting control plane node embed-certs-362000 in cluster embed-certs-362000
	I0103 12:55:49.525980   28512 cache.go:121] Beginning downloading kic base image for docker with docker
	I0103 12:55:49.547596   28512 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 12:55:49.568691   28512 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0103 12:55:49.568796   28512 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0103 12:55:49.568797   28512 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 12:55:49.568828   28512 cache.go:56] Caching tarball of preloaded images
	I0103 12:55:49.569045   28512 preload.go:174] Found /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0103 12:55:49.569064   28512 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0103 12:55:49.569270   28512 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/embed-certs-362000/config.json ...
	I0103 12:55:49.621310   28512 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 12:55:49.621414   28512 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 12:55:49.621437   28512 cache.go:194] Successfully downloaded all kic artifacts
	I0103 12:55:49.621494   28512 start.go:365] acquiring machines lock for embed-certs-362000: {Name:mk06d6f1d23811bda935c0077e8a7bf26464688f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 12:55:49.621585   28512 start.go:369] acquired machines lock for "embed-certs-362000" in 68.307µs
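
The "acquiring machines lock" step above serializes concurrent minikube runs against the same machine store: the logged lock struct retries every 500ms (Delay) for up to 10 minutes (Timeout) before giving up. Below is a minimal Go sketch of that retry-until-deadline pattern, assuming an illustrative lockfile path; it is not minikube's actual lock implementation.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire creates path exclusively, retrying every delay until timeout, then
// hands back a release func. Delay/Timeout mirror the values in the logged
// lock struct; the lockfile mechanism itself is only illustrative.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err // unexpected filesystem error, not contention
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay) // lock is held by another run; wait and retry
	}
}

func main() {
	release, err := acquire("/tmp/machines-embed-certs-362000.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to provision the machine")
}

The log shows the uncontended case: the lock was acquired in 68.307µs.
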
	I0103 12:55:49.621608   28512 start.go:96] Skipping create...Using existing machine configuration
	I0103 12:55:49.621617   28512 fix.go:54] fixHost starting: 
	I0103 12:55:49.621847   28512 cli_runner.go:164] Run: docker container inspect embed-certs-362000 --format={{.State.Status}}
	I0103 12:55:49.674117   28512 fix.go:102] recreateIfNeeded on embed-certs-362000: state=Stopped err=<nil>
	W0103 12:55:49.674149   28512 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 12:55:49.695782   28512 out.go:177] * Restarting existing docker container for "embed-certs-362000" ...
	I0103 12:55:49.738584   28512 cli_runner.go:164] Run: docker start embed-certs-362000
	I0103 12:55:49.988708   28512 cli_runner.go:164] Run: docker container inspect embed-certs-362000 --format={{.State.Status}}
	I0103 12:55:50.049623   28512 kic.go:430] container "embed-certs-362000" state is running.
	I0103 12:55:50.050264   28512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-362000
	I0103 12:55:50.106216   28512 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/embed-certs-362000/config.json ...
	I0103 12:55:50.106639   28512 machine.go:88] provisioning docker machine ...
	I0103 12:55:50.106662   28512 ubuntu.go:169] provisioning hostname "embed-certs-362000"
	I0103 12:55:50.106740   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:50.171174   28512 main.go:141] libmachine: Using SSH client type: native
	I0103 12:55:50.171655   28512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61799 <nil> <nil>}
	I0103 12:55:50.171675   28512 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-362000 && echo "embed-certs-362000" | sudo tee /etc/hostname
	I0103 12:55:50.173474   28512 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0103 12:55:53.303014   28512 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-362000
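
The "Error dialing TCP: ssh: handshake failed: EOF" at 12:55:50 is transient: the container was restarted a fraction of a second earlier and its sshd is not yet accepting connections, so the provisioner keeps retrying until the hostname command succeeds at 12:55:53. A sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh; the address and user come from the surrounding log lines, while the retry policy and auth setup are illustrative.

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps attempting an SSH connection until the daemon inside
// the freshly started container answers, or the overall deadline passes.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh not ready after %s: %w", timeout, err)
		}
		time.Sleep(time.Second) // sshd is still booting; try again
	}
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker", // matches Username in the sshutil log lines
		Auth:            []ssh.AuthMethod{ /* e.g. ssh.PublicKeys(signer) from id_rsa */ },
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:61799", cfg, time.Minute)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh ready")
}
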
	
	I0103 12:55:53.303104   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:53.355192   28512 main.go:141] libmachine: Using SSH client type: native
	I0103 12:55:53.355486   28512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61799 <nil> <nil>}
	I0103 12:55:53.355499   28512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-362000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-362000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-362000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 12:55:53.473533   28512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 12:55:53.473559   28512 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17885-10646/.minikube CaCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17885-10646/.minikube}
	I0103 12:55:53.473576   28512 ubuntu.go:177] setting up certificates
	I0103 12:55:53.473592   28512 provision.go:83] configureAuth start
	I0103 12:55:53.473660   28512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-362000
	I0103 12:55:53.525173   28512 provision.go:138] copyHostCerts
	I0103 12:55:53.525281   28512 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem, removing ...
	I0103 12:55:53.525291   28512 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem
	I0103 12:55:53.525421   28512 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem (1078 bytes)
	I0103 12:55:53.526333   28512 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem, removing ...
	I0103 12:55:53.526341   28512 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem
	I0103 12:55:53.526429   28512 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem (1123 bytes)
	I0103 12:55:53.526617   28512 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem, removing ...
	I0103 12:55:53.526624   28512 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem
	I0103 12:55:53.526699   28512 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem (1679 bytes)
	I0103 12:55:53.526860   28512 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem org=jenkins.embed-certs-362000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-362000]
	I0103 12:55:53.729075   28512 provision.go:172] copyRemoteCerts
	I0103 12:55:53.729156   28512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 12:55:53.729211   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:53.786976   28512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/embed-certs-362000/id_rsa Username:docker}
	I0103 12:55:53.873921   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 12:55:53.896090   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0103 12:55:53.916499   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 12:55:53.936954   28512 provision.go:86] duration metric: configureAuth took 463.333963ms
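
configureAuth above regenerates the machine's Docker server certificate, signed by the CA under .minikube/certs and carrying the SANs listed in the "generating server cert" line (192.168.67.2, 127.0.0.1, localhost, minikube, embed-certs-362000). A self-contained sketch of issuing that kind of SAN-bearing server certificate with crypto/x509; it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and most error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; minikube instead loads ca.pem/ca-key.pem from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the provision log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-362000"}},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-362000"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server.pem payload: %d DER bytes\n", len(der))
}
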
	I0103 12:55:53.936968   28512 ubuntu.go:193] setting minikube options for container-runtime
	I0103 12:55:53.937117   28512 config.go:182] Loaded profile config "embed-certs-362000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0103 12:55:53.937186   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:53.988815   28512 main.go:141] libmachine: Using SSH client type: native
	I0103 12:55:53.989128   28512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61799 <nil> <nil>}
	I0103 12:55:53.989138   28512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0103 12:55:54.108810   28512 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0103 12:55:54.108825   28512 ubuntu.go:71] root file system type: overlay
	I0103 12:55:54.108930   28512 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0103 12:55:54.109011   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:54.160778   28512 main.go:141] libmachine: Using SSH client type: native
	I0103 12:55:54.161074   28512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61799 <nil> <nil>}
	I0103 12:55:54.161122   28512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0103 12:55:54.292004   28512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0103 12:55:54.292150   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:54.344332   28512 main.go:141] libmachine: Using SSH client type: native
	I0103 12:55:54.344618   28512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 61799 <nil> <nil>}
	I0103 12:55:54.344631   28512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0103 12:55:54.469420   28512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 12:55:54.469440   28512 machine.go:91] provisioned docker machine in 4.362680199s
	I0103 12:55:54.469446   28512 start.go:300] post-start starting for "embed-certs-362000" (driver="docker")
	I0103 12:55:54.469463   28512 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 12:55:54.469538   28512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 12:55:54.469608   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:54.521526   28512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/embed-certs-362000/id_rsa Username:docker}
	I0103 12:55:54.609894   28512 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 12:55:54.613845   28512 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 12:55:54.613869   28512 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 12:55:54.613876   28512 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 12:55:54.613882   28512 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 12:55:54.613895   28512 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/addons for local assets ...
	I0103 12:55:54.613991   28512 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/files for local assets ...
	I0103 12:55:54.614179   28512 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> 110902.pem in /etc/ssl/certs
	I0103 12:55:54.614375   28512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 12:55:54.622383   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:55:54.642441   28512 start.go:303] post-start completed in 172.980514ms
	I0103 12:55:54.642522   28512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 12:55:54.642581   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:54.694359   28512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/embed-certs-362000/id_rsa Username:docker}
	I0103 12:55:54.777204   28512 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 12:55:54.782070   28512 fix.go:56] fixHost completed within 5.160318951s
	I0103 12:55:54.782082   28512 start.go:83] releasing machines lock for "embed-certs-362000", held for 5.160357191s
	I0103 12:55:54.782159   28512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-362000
	I0103 12:55:54.834272   28512 ssh_runner.go:195] Run: cat /version.json
	I0103 12:55:54.834303   28512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 12:55:54.834350   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:54.834376   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:54.888312   28512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/embed-certs-362000/id_rsa Username:docker}
	I0103 12:55:54.888503   28512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/embed-certs-362000/id_rsa Username:docker}
	I0103 12:55:55.092527   28512 ssh_runner.go:195] Run: systemctl --version
	I0103 12:55:55.097474   28512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 12:55:55.102907   28512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0103 12:55:55.119547   28512 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0103 12:55:55.119644   28512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 12:55:55.128534   28512 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0103 12:55:55.128556   28512 start.go:475] detecting cgroup driver to use...
	I0103 12:55:55.128574   28512 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:55:55.128697   28512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:55:55.143490   28512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0103 12:55:55.152806   28512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0103 12:55:55.162179   28512 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0103 12:55:55.162239   28512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0103 12:55:55.171788   28512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:55:55.181148   28512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0103 12:55:55.190406   28512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 12:55:55.199602   28512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 12:55:55.208478   28512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0103 12:55:55.218132   28512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 12:55:55.226452   28512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 12:55:55.234772   28512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:55:55.285187   28512 ssh_runner.go:195] Run: sudo systemctl restart containerd
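
The sed pipeline above rewrites /etc/containerd/config.toml in place so containerd matches the detected "cgroupfs" driver, most importantly forcing SystemdCgroup = false, and then reloads and restarts containerd. The same indentation-preserving substitution expressed with Go's regexp package, run against an in-memory snippet to keep the sketch self-contained:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Equivalent of the logged command:
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}
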
	I0103 12:55:55.364738   28512 start.go:475] detecting cgroup driver to use...
	I0103 12:55:55.364759   28512 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 12:55:55.364836   28512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0103 12:55:55.390044   28512 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0103 12:55:55.390121   28512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0103 12:55:55.401539   28512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 12:55:55.417922   28512 ssh_runner.go:195] Run: which cri-dockerd
	I0103 12:55:55.427412   28512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0103 12:55:55.436186   28512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0103 12:55:55.454305   28512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0103 12:55:55.566825   28512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0103 12:55:55.662546   28512 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0103 12:55:55.662630   28512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0103 12:55:55.679443   28512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:55:55.764547   28512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 12:55:56.038673   28512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0103 12:55:56.097136   28512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0103 12:55:56.149286   28512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0103 12:55:56.213687   28512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:55:56.267902   28512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0103 12:55:56.292526   28512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 12:55:56.349589   28512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0103 12:55:56.429263   28512 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0103 12:55:56.429349   28512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0103 12:55:56.433830   28512 start.go:543] Will wait 60s for crictl version
	I0103 12:55:56.433916   28512 ssh_runner.go:195] Run: which crictl
	I0103 12:55:56.437952   28512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 12:55:56.484364   28512 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0103 12:55:56.484447   28512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 12:55:56.508336   28512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 12:55:56.556247   28512 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0103 12:55:56.556369   28512 cli_runner.go:164] Run: docker exec -t embed-certs-362000 dig +short host.docker.internal
	I0103 12:55:56.677711   28512 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0103 12:55:56.677811   28512 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0103 12:55:56.682399   28512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
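
The two steps above resolve the host's address from inside the container (dig +short host.docker.internal) and then splice a host.minikube.internal entry into /etc/hosts, first dropping any stale line for that name so repeated starts stay idempotent; the same pattern recurs below for control-plane.minikube.internal. A Go sketch of that upsert (the path, IP, and hostname are the ones from the log; writing /etc/hosts requires root, and the shell one-liner in the log is the authoritative version):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing line for name, appends the fresh mapping,
// and rewrites the file in one pass, mirroring the logged shell one-liner.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.65.254", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
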
	I0103 12:55:56.692820   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:56.744336   28512 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0103 12:55:56.744427   28512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:55:56.765998   28512 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0103 12:55:56.766021   28512 docker.go:601] Images already preloaded, skipping extraction
	I0103 12:55:56.766104   28512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 12:55:56.786342   28512 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0103 12:55:56.786371   28512 cache_images.go:84] Images are preloaded, skipping loading
	I0103 12:55:56.786450   28512 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
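
docker info --format {{.CgroupDriver}} asks the daemon inside the machine which cgroup driver it ended up with after the daemon.json rewrite; the kubeadm options below confirm the answer (CgroupDriver:cgroupfs). The same probe run locally from Go via os/exec, as a small illustration rather than minikube's code path (which runs it over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the Docker daemon which cgroup driver it uses ("cgroupfs" in this run).
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
}
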
	I0103 12:55:56.835390   28512 cni.go:84] Creating CNI manager for ""
	I0103 12:55:56.835407   28512 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 12:55:56.835421   28512 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 12:55:56.835438   28512 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-362000 NodeName:embed-certs-362000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 12:55:56.835552   28512 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-362000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 12:55:56.835611   28512 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-362000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-362000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 12:55:56.835674   28512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 12:55:56.844281   28512 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 12:55:56.844342   28512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 12:55:56.852586   28512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0103 12:55:56.867839   28512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 12:55:56.883198   28512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0103 12:55:56.898517   28512 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0103 12:55:56.902562   28512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 12:55:56.912773   28512 certs.go:56] Setting up /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/embed-certs-362000 for IP: 192.168.67.2
	I0103 12:55:56.912802   28512 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a30c05f18415c794a1ae2617714fd3a6ba516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:55:56.912999   28512 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key
	I0103 12:55:56.913074   28512 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key
	I0103 12:55:56.913179   28512 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/embed-certs-362000/client.key
	I0103 12:55:56.913284   28512 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/embed-certs-362000/apiserver.key.c7fa3a9e
	I0103 12:55:56.913356   28512 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/embed-certs-362000/proxy-client.key
	I0103 12:55:56.913571   28512 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem (1338 bytes)
	W0103 12:55:56.913615   28512 certs.go:433] ignoring /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090_empty.pem, impossibly tiny 0 bytes
	I0103 12:55:56.913624   28512 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 12:55:56.913658   28512 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem (1078 bytes)
	I0103 12:55:56.913689   28512 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem (1123 bytes)
	I0103 12:55:56.913717   28512 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem (1679 bytes)
	I0103 12:55:56.913786   28512 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem (1708 bytes)
	I0103 12:55:56.914378   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/embed-certs-362000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 12:55:56.934463   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/embed-certs-362000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 12:55:56.954828   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/embed-certs-362000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 12:55:56.975729   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/embed-certs-362000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 12:55:56.997295   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 12:55:57.021920   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 12:55:57.044939   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 12:55:57.065963   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 12:55:57.086662   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem --> /usr/share/ca-certificates/11090.pem (1338 bytes)
	I0103 12:55:57.107436   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /usr/share/ca-certificates/110902.pem (1708 bytes)
	I0103 12:55:57.129316   28512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 12:55:57.149938   28512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 12:55:57.165332   28512 ssh_runner.go:195] Run: openssl version
	I0103 12:55:57.170776   28512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 12:55:57.179863   28512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:55:57.183860   28512 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:55:57.183917   28512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 12:55:57.190347   28512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 12:55:57.198710   28512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11090.pem && ln -fs /usr/share/ca-certificates/11090.pem /etc/ssl/certs/11090.pem"
	I0103 12:55:57.207727   28512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11090.pem
	I0103 12:55:57.211988   28512 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:57 /usr/share/ca-certificates/11090.pem
	I0103 12:55:57.212030   28512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11090.pem
	I0103 12:55:57.218529   28512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11090.pem /etc/ssl/certs/51391683.0"
	I0103 12:55:57.226961   28512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110902.pem && ln -fs /usr/share/ca-certificates/110902.pem /etc/ssl/certs/110902.pem"
	I0103 12:55:57.236037   28512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110902.pem
	I0103 12:55:57.240136   28512 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:57 /usr/share/ca-certificates/110902.pem
	I0103 12:55:57.240181   28512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110902.pem
	I0103 12:55:57.246810   28512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110902.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 12:55:57.255123   28512 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 12:55:57.259161   28512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 12:55:57.265331   28512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 12:55:57.271549   28512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 12:55:57.277880   28512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 12:55:57.284299   28512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 12:55:57.290587   28512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
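
Each openssl x509 -checkend 86400 above exits non-zero if the certificate expires within the next 86400 seconds (24h), which is what tells minikube whether a control-plane cert needs regenerating before reuse. An equivalent check in Go with crypto/x509, using one of the paths probed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, matching `openssl x509 -checkend` on the same input.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon) // true would trigger regeneration
}
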
	I0103 12:55:57.296749   28512 kubeadm.go:404] StartCluster: {Name:embed-certs-362000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-362000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:55:57.296861   28512 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 12:55:57.315199   28512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 12:55:57.323790   28512 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 12:55:57.323807   28512 kubeadm.go:636] restartCluster start
	I0103 12:55:57.323865   28512 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 12:55:57.331849   28512 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:55:57.331945   28512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-362000
	I0103 12:55:57.384272   28512 kubeconfig.go:135] verify returned: extract IP: "embed-certs-362000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 12:55:57.384439   28512 kubeconfig.go:146] "embed-certs-362000" context is missing from /Users/jenkins/minikube-integration/17885-10646/kubeconfig - will repair!
	I0103 12:55:57.384805   28512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/kubeconfig: {Name:mk61966fd03b327572b428e807810fbe63a7e94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 12:55:57.386356   28512 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 12:55:57.395114   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:55:57.395174   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:55:57.404500   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:55:57.895590   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:55:57.895807   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:55:57.907090   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:55:58.396060   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:55:58.396164   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:55:58.407621   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:55:58.895590   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:55:58.895702   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:55:58.907225   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:55:59.396451   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:55:59.396605   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:55:59.408011   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:55:59.895286   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:55:59.895472   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:55:59.906824   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:00.395549   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:00.395684   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:00.407075   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:00.895814   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:00.895923   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:00.907260   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:01.396325   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:01.396549   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:01.408191   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:01.897379   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:01.897548   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:01.909063   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:02.397471   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:02.397571   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:02.409081   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:02.897181   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:02.897355   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:02.909139   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:03.395515   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:03.395709   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:03.407492   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:03.896264   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:03.896385   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:03.907842   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:04.395412   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:04.395534   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:04.406821   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:04.896954   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:04.897084   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:04.908898   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:05.395606   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:05.395747   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:05.406982   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:05.896349   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:05.896474   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:05.907933   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:06.395467   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:06.395639   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:06.407352   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:06.895503   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:06.895629   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:06.906929   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:07.397484   28512 api_server.go:166] Checking apiserver status ...
	I0103 12:56:07.397619   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 12:56:07.409089   28512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:07.409103   28512 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
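
Note: the block above is one probe per ~500 ms: run `pgrep` for the apiserver, log the nonzero exit, retry. When the surrounding context's deadline passes with no hit, the caller concludes `needs reconfigure`. A minimal sketch of that polling pattern (the probe function stands in for the real ssh_runner call):

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func apiserverRunning() bool {
    	// stands in for: sudo pgrep -xnf kube-apiserver.*minikube.*
    	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(ctx context.Context) error {
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		if apiserverRunning() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return errors.New("apiserver error: " + ctx.Err().Error())
    		case <-tick.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	fmt.Println(waitForAPIServer(ctx))
    }
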
	I0103 12:56:07.409122   28512 kubeadm.go:1135] stopping kube-system containers ...
	I0103 12:56:07.409209   28512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 12:56:07.430168   28512 docker.go:469] Stopping containers: [eb0db09c7233 4176c7281d3b db2a0f055d5e cc48294fdc14 9e2d02bc44a5 b66832a6e81f b09d893e1c02 ae71863e6e1e ce9de433d6d4 cae395a6df00 cbb594d69c27 3e4ad075d1cd 892197f3e99e ae3e80ff09a4 c6c45a6173ae]
	I0103 12:56:07.430263   28512 ssh_runner.go:195] Run: docker stop eb0db09c7233 4176c7281d3b db2a0f055d5e cc48294fdc14 9e2d02bc44a5 b66832a6e81f b09d893e1c02 ae71863e6e1e ce9de433d6d4 cae395a6df00 cbb594d69c27 3e4ad075d1cd 892197f3e99e ae3e80ff09a4 c6c45a6173ae
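
Note: the `--filter=name=k8s_.*_(kube-system)_` pattern works because dockershim names containers `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>`, so the namespace can be matched positionally. For illustration (the sample name is made up):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// dockershim container names: k8s_<container>_<pod>_<namespace>_<uid>_<attempt>
    	re := regexp.MustCompile(`k8s_.*_(kube-system)_`)
    	name := "k8s_kube-apiserver_kube-apiserver-embed-certs-362000_kube-system_uid0_0"
    	fmt.Println(re.MatchString(name)) // true: the pod lives in kube-system
    }
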
	I0103 12:56:07.450675   28512 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 12:56:07.462184   28512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:56:07.470564   28512 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  3 20:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  3 20:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jan  3 20:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  3 20:55 /etc/kubernetes/scheduler.conf
	
	I0103 12:56:07.470627   28512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0103 12:56:07.478873   28512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0103 12:56:07.487207   28512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0103 12:56:07.495434   28512 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:07.495504   28512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0103 12:56:07.503568   28512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0103 12:56:07.511770   28512 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:56:07.511821   28512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0103 12:56:07.519726   28512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 12:56:07.528244   28512 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
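
Note: the grep/rm sequence above is a staleness check: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so the following `kubeadm init phase kubeconfig` regenerates it. A sketch of that check, with the endpoint hard-coded for illustration:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    // dropIfStale removes conf files that no longer point at the expected
    // API endpoint, mirroring the grep-then-rm sequence in the log.
    func dropIfStale(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if !bytes.Contains(data, []byte(endpoint)) {
    		fmt.Printf("%s may not contain %q - removing\n", path, endpoint)
    		return os.Remove(path)
    	}
    	return nil
    }

    func main() {
    	for _, f := range []string{
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		_ = dropIfStale(f)
    	}
    }
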
	I0103 12:56:07.528260   28512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:56:07.574640   28512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:56:08.195184   28512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:56:08.316785   28512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:56:08.369595   28512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
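
Note: rather than a full `kubeadm init`, the restart path above replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same kubeadm.yaml, which preserves cluster identity while rewriting what the stale-config cleanup removed. The sequencing, sketched:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := []string{
    		"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local",
    	}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" `+
    				`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
    			fmt.Println("phase failed:", p, err)
    			return
    		}
    	}
    }
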
	I0103 12:56:08.455909   28512 api_server.go:52] waiting for apiserver process to appear ...
	I0103 12:56:08.456015   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:56:08.956180   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:56:09.456512   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:56:09.956346   28512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:56:09.968530   28512 api_server.go:72] duration metric: took 1.51258108s to wait for apiserver process to appear ...
	I0103 12:56:09.968546   28512 api_server.go:88] waiting for apiserver healthz status ...
	I0103 12:56:09.968575   28512 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61798/healthz ...
	I0103 12:56:12.344612   28512 api_server.go:279] https://127.0.0.1:61798/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 12:56:12.344632   28512 api_server.go:103] status: https://127.0.0.1:61798/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 12:56:12.344646   28512 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61798/healthz ...
	I0103 12:56:12.436837   28512 api_server.go:279] https://127.0.0.1:61798/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 12:56:12.436871   28512 api_server.go:103] status: https://127.0.0.1:61798/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
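
Note: each `[+]`/`[-]` line is one named health check; the 500s here are transient because post-start hooks such as rbac/bootstrap-roles only finish a few seconds after the binary starts. Extracting just the failing checks from such a body is a simple line scan, e.g.:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // failingChecks returns the "[-]..." lines from a /healthz verbose body.
    func failingChecks(body string) []string {
    	var out []string
    	sc := bufio.NewScanner(strings.NewReader(body))
    	for sc.Scan() {
    		if line := strings.TrimSpace(sc.Text()); strings.HasPrefix(line, "[-]") {
    			out = append(out, line)
    		}
    	}
    	return out
    }

    func main() {
    	body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n"
    	fmt.Println(failingChecks(body))
    }
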
	I0103 12:56:12.470727   28512 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61798/healthz ...
	I0103 12:56:12.537866   28512 api_server.go:279] https://127.0.0.1:61798/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 12:56:12.537897   28512 api_server.go:103] status: https://127.0.0.1:61798/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:56:12.969465   28512 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61798/healthz ...
	I0103 12:56:13.036705   28512 api_server.go:279] https://127.0.0.1:61798/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 12:56:13.036736   28512 api_server.go:103] status: https://127.0.0.1:61798/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:56:13.469423   28512 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61798/healthz ...
	I0103 12:56:13.538455   28512 api_server.go:279] https://127.0.0.1:61798/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 12:56:13.538485   28512 api_server.go:103] status: https://127.0.0.1:61798/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 12:56:13.969436   28512 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61798/healthz ...
	I0103 12:56:13.976687   28512 api_server.go:279] https://127.0.0.1:61798/healthz returned 200:
	ok
	I0103 12:56:14.039245   28512 api_server.go:141] control plane version: v1.28.4
	I0103 12:56:14.039276   28512 api_server.go:131] duration metric: took 4.070610975s to wait for apiserver health ...
	I0103 12:56:14.039286   28512 cni.go:84] Creating CNI manager for ""
	I0103 12:56:14.039302   28512 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 12:56:14.078402   28512 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 12:56:14.115040   28512 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 12:56:14.124766   28512 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
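
Note: the 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI config. Its exact contents are not reproduced in the log; a representative bridge conflist, written from Go purely for illustration (field values are assumptions, not the shipped file):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Representative bridge CNI config; values are illustrative, not minikube's.
    	conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.244.0.0/16" }]] }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
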
	I0103 12:56:14.140786   28512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 12:56:14.148631   28512 system_pods.go:59] 8 kube-system pods found
	I0103 12:56:14.148650   28512 system_pods.go:61] "coredns-5dd5756b68-p7b8r" [83e77ee6-c8fb-4426-8527-ba2c01092067] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 12:56:14.148657   28512 system_pods.go:61] "etcd-embed-certs-362000" [e2099ac8-6111-4555-b9f1-360b41dedd35] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 12:56:14.148663   28512 system_pods.go:61] "kube-apiserver-embed-certs-362000" [bb071e6e-711c-43d3-a19f-a4a385782291] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 12:56:14.148675   28512 system_pods.go:61] "kube-controller-manager-embed-certs-362000" [3cde1614-9d5e-40bf-b685-c22ff7c295c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 12:56:14.148680   28512 system_pods.go:61] "kube-proxy-crc4z" [5cb072e7-918e-4a6e-9f5f-73a4b0c459fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 12:56:14.148689   28512 system_pods.go:61] "kube-scheduler-embed-certs-362000" [c07cea7a-a320-448f-a7ef-5879c3ff900c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 12:56:14.148695   28512 system_pods.go:61] "metrics-server-57f55c9bc5-mhwbp" [5eef1875-4f6b-4709-b745-2d02f4673dbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 12:56:14.148698   28512 system_pods.go:61] "storage-provisioner" [ae033f2a-ce79-4e76-998d-fe7f07648bff] Running
	I0103 12:56:14.148704   28512 system_pods.go:74] duration metric: took 7.90687ms to wait for pod list to return data ...
	I0103 12:56:14.148710   28512 node_conditions.go:102] verifying NodePressure condition ...
	I0103 12:56:14.151745   28512 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0103 12:56:14.151760   28512 node_conditions.go:123] node cpu capacity is 12
	I0103 12:56:14.151770   28512 node_conditions.go:105] duration metric: took 3.055152ms to run NodePressure ...
	I0103 12:56:14.151781   28512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 12:56:14.278854   28512 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 12:56:14.283127   28512 kubeadm.go:787] kubelet initialised
	I0103 12:56:14.283139   28512 kubeadm.go:788] duration metric: took 4.269988ms waiting for restarted kubelet to initialise ...
	I0103 12:56:14.283145   28512 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
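
Note: "Ready" in the polls below means the pod's PodReady condition is True, not merely that the pod is Running; coredns is Running above but its container is not yet ready, hence the repeated "Ready":"False" lines. Checking that condition with client-go, as a sketch (the kubeconfig path is illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	pod, err := client.CoreV1().Pods("kube-system").Get(
    		context.Background(), "coredns-5dd5756b68-p7b8r", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			fmt.Println("Ready:", c.Status) // "True" or "False"
    		}
    	}
    }
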
	I0103 12:56:14.288399   28512 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-p7b8r" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:16.295171   28512 pod_ready.go:102] pod "coredns-5dd5756b68-p7b8r" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:18.296242   28512 pod_ready.go:102] pod "coredns-5dd5756b68-p7b8r" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:16.877223   28038 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0103 12:56:16.877699   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:56:16.877847   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
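
Note: these `[kubelet-check]` lines come from the parallel run under pid 28038 (the v1.16.0 cluster), interleaved with the embed-certs output: kubeadm polls the kubelet's local healthz endpoint on 127.0.0.1:10248, and "connection refused" means the kubelet never bound the port. The probe is just an HTTP GET:

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	// kubeadm's kubelet-check: GET the kubelet healthz endpoint.
    	resp, err := http.Get("http://localhost:10248/healthz")
    	if err != nil {
    		fmt.Println("kubelet isn't running or healthy:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("kubelet healthz:", resp.Status)
    }
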
	I0103 12:56:20.795626   28512 pod_ready.go:102] pod "coredns-5dd5756b68-p7b8r" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:21.294937   28512 pod_ready.go:92] pod "coredns-5dd5756b68-p7b8r" in "kube-system" namespace has status "Ready":"True"
	I0103 12:56:21.294959   28512 pod_ready.go:81] duration metric: took 7.006364943s waiting for pod "coredns-5dd5756b68-p7b8r" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:21.294971   28512 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-362000" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:23.301903   28512 pod_ready.go:92] pod "etcd-embed-certs-362000" in "kube-system" namespace has status "Ready":"True"
	I0103 12:56:23.301916   28512 pod_ready.go:81] duration metric: took 2.006883104s waiting for pod "etcd-embed-certs-362000" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:23.301923   28512 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-362000" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:21.878574   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:56:21.878724   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:56:25.307992   28512 pod_ready.go:102] pod "kube-apiserver-embed-certs-362000" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:27.310940   28512 pod_ready.go:102] pod "kube-apiserver-embed-certs-362000" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:28.838234   28512 pod_ready.go:92] pod "kube-apiserver-embed-certs-362000" in "kube-system" namespace has status "Ready":"True"
	I0103 12:56:28.838250   28512 pod_ready.go:81] duration metric: took 5.536178864s waiting for pod "kube-apiserver-embed-certs-362000" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:28.838257   28512 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-362000" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:28.843579   28512 pod_ready.go:92] pod "kube-controller-manager-embed-certs-362000" in "kube-system" namespace has status "Ready":"True"
	I0103 12:56:28.843591   28512 pod_ready.go:81] duration metric: took 5.329054ms waiting for pod "kube-controller-manager-embed-certs-362000" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:28.843599   28512 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-crc4z" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:28.848824   28512 pod_ready.go:92] pod "kube-proxy-crc4z" in "kube-system" namespace has status "Ready":"True"
	I0103 12:56:28.848836   28512 pod_ready.go:81] duration metric: took 5.231713ms waiting for pod "kube-proxy-crc4z" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:28.848842   28512 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-362000" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:28.853420   28512 pod_ready.go:92] pod "kube-scheduler-embed-certs-362000" in "kube-system" namespace has status "Ready":"True"
	I0103 12:56:28.853430   28512 pod_ready.go:81] duration metric: took 4.576903ms waiting for pod "kube-scheduler-embed-certs-362000" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:28.853436   28512 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace to be "Ready" ...
	I0103 12:56:30.860592   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:33.360436   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:31.879869   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:56:31.880027   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:56:35.361871   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:37.861117   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:40.361853   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:42.860274   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:44.861494   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:46.862285   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:49.362016   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:51.861721   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:51.881181   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:56:51.881348   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:56:54.361522   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:56.860682   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:56:58.864221   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:01.361365   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:03.361812   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:05.861522   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:08.361944   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:10.860667   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:12.862353   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:15.361869   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:17.862117   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:19.862850   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:22.361874   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:24.363136   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:26.862653   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:31.883715   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:57:31.883869   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:57:31.883880   28038 kubeadm.go:322] 
	I0103 12:57:31.883907   28038 kubeadm.go:322] Unfortunately, an error has occurred:
	I0103 12:57:31.883932   28038 kubeadm.go:322] 	timed out waiting for the condition
	I0103 12:57:31.883938   28038 kubeadm.go:322] 
	I0103 12:57:31.883963   28038 kubeadm.go:322] This error is likely caused by:
	I0103 12:57:31.883992   28038 kubeadm.go:322] 	- The kubelet is not running
	I0103 12:57:31.884072   28038 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0103 12:57:31.884079   28038 kubeadm.go:322] 
	I0103 12:57:31.884157   28038 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0103 12:57:31.884180   28038 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0103 12:57:31.884206   28038 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0103 12:57:31.884216   28038 kubeadm.go:322] 
	I0103 12:57:31.884332   28038 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0103 12:57:31.884435   28038 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0103 12:57:31.884504   28038 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0103 12:57:31.884546   28038 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0103 12:57:31.884604   28038 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0103 12:57:31.884634   28038 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0103 12:57:31.886150   28038 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0103 12:57:31.886218   28038 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0103 12:57:31.886332   28038 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0103 12:57:31.886430   28038 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 12:57:31.886509   28038 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0103 12:57:31.886565   28038 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0103 12:57:31.886636   28038 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0103 12:57:31.886666   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0103 12:57:32.302539   28038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 12:57:32.313212   28038 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 12:57:32.313276   28038 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 12:57:32.321754   28038 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 12:57:32.321773   28038 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
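
Note: after the wait-control-plane timeout, minikube does not give up immediately: it runs `kubeadm reset --force`, confirms the /etc/kubernetes/*.conf files are gone (the "config check failed" exit status 2 from ls above is the expected outcome), and replays the same `kubeadm init`. The retry skeleton, sketched:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func bash(cmd string) error { return exec.Command("/bin/bash", "-c", cmd).Run() }

    func main() {
    	initCmd := `sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" ` +
    		`kubeadm init --config /var/tmp/minikube/kubeadm.yaml`
    	if err := bash(initCmd); err != nil {
    		fmt.Println("! initialization failed, will try again:", err)
    		_ = bash(`sudo kubeadm reset --cri-socket /var/run/dockershim.sock --force`)
    		if err := bash(initCmd); err != nil {
    			fmt.Println("second attempt failed:", err)
    		}
    	}
    }
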
	I0103 12:57:32.370029   28038 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0103 12:57:32.370139   28038 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 12:57:32.626686   28038 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 12:57:32.626778   28038 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 12:57:32.626864   28038 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 12:57:32.799190   28038 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 12:57:32.800098   28038 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 12:57:32.806447   28038 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0103 12:57:32.877066   28038 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 12:57:32.898385   28038 out.go:204]   - Generating certificates and keys ...
	I0103 12:57:32.898471   28038 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 12:57:32.898524   28038 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 12:57:32.898590   28038 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0103 12:57:32.898673   28038 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0103 12:57:32.898753   28038 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0103 12:57:32.898824   28038 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0103 12:57:32.898940   28038 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0103 12:57:32.899028   28038 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0103 12:57:32.899104   28038 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0103 12:57:32.899190   28038 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0103 12:57:32.899264   28038 kubeadm.go:322] [certs] Using the existing "sa" key
	I0103 12:57:32.899335   28038 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 12:57:32.941717   28038 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 12:57:33.108106   28038 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 12:57:33.190846   28038 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 12:57:33.297984   28038 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 12:57:33.299073   28038 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 12:57:29.362190   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:31.362988   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:33.320504   28038 out.go:204]   - Booting up control plane ...
	I0103 12:57:33.320605   28038 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 12:57:33.320692   28038 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 12:57:33.320768   28038 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 12:57:33.320882   28038 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 12:57:33.321074   28038 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 12:57:33.862901   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:36.362771   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:38.863888   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:41.361931   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:43.861407   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:46.362789   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:48.363296   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:50.862742   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:53.363134   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:55.363510   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:57:57.862831   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:00.362942   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:02.363691   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:04.862599   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:07.365255   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:09.863621   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:12.362744   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:13.308842   28038 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0103 12:58:13.309543   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:58:13.309762   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:58:14.863508   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:17.364333   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:18.311492   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:58:18.311762   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:58:19.864017   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:22.363198   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:24.862553   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:26.862661   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:28.313440   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:58:28.313646   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:58:28.864773   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:31.364086   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:33.364273   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:35.366545   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:37.864310   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:40.364598   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:42.863585   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:44.864767   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:47.362780   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:48.315168   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:58:48.315398   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:58:49.365412   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:51.864318   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:54.364937   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:56.865115   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:58:59.364320   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:01.365332   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:03.863703   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:06.363418   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:08.365022   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:10.864486   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:13.363717   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:15.364353   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:17.365088   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:19.365260   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:21.865297   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:28.318112   28038 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0103 12:59:28.318393   28038 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0103 12:59:28.318413   28038 kubeadm.go:322] 
	I0103 12:59:28.318517   28038 kubeadm.go:322] Unfortunately, an error has occurred:
	I0103 12:59:28.318565   28038 kubeadm.go:322] 	timed out waiting for the condition
	I0103 12:59:28.318571   28038 kubeadm.go:322] 
	I0103 12:59:28.318614   28038 kubeadm.go:322] This error is likely caused by:
	I0103 12:59:28.318661   28038 kubeadm.go:322] 	- The kubelet is not running
	I0103 12:59:28.318786   28038 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0103 12:59:28.318795   28038 kubeadm.go:322] 
	I0103 12:59:28.318919   28038 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0103 12:59:28.318958   28038 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0103 12:59:28.318991   28038 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0103 12:59:28.318997   28038 kubeadm.go:322] 
	I0103 12:59:28.319111   28038 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0103 12:59:28.319239   28038 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0103 12:59:28.319400   28038 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0103 12:59:28.319483   28038 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0103 12:59:28.319560   28038 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0103 12:59:28.319600   28038 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0103 12:59:28.321025   28038 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0103 12:59:28.321106   28038 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0103 12:59:28.321235   28038 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0103 12:59:28.321393   28038 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 12:59:28.321471   28038 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0103 12:59:28.321530   28038 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
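Each `[kubelet-check]` line above corresponds to one failed probe of the kubelet's health endpoint on port 10248, repeated until kubeadm's wait window expires. A minimal sketch of that probe loop, assuming the 4m0s window stated in the log (the retry interval and function names are illustrative, not kubeadm's implementation):

```go
// Poll http://localhost:10248/healthz until it answers 200 or we time out.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func kubeletHealthy() bool {
	resp, err := http.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err) // e.g. connection refused
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"
	for time.Now().Before(deadline) {
		if kubeletHealthy() {
			fmt.Println("kubelet is healthy")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet")
}
```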
	I0103 12:59:28.321593   28038 kubeadm.go:406] StartCluster complete in 8m5.573207777s
	I0103 12:59:28.321672   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0103 12:59:28.340506   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.340521   28038 logs.go:286] No container was found matching "kube-apiserver"
	I0103 12:59:28.340589   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0103 12:59:28.358910   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.358925   28038 logs.go:286] No container was found matching "etcd"
	I0103 12:59:28.358995   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0103 12:59:28.378961   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.378975   28038 logs.go:286] No container was found matching "coredns"
	I0103 12:59:28.379047   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0103 12:59:28.398954   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.398969   28038 logs.go:286] No container was found matching "kube-scheduler"
	I0103 12:59:28.399039   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0103 12:59:28.418263   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.418277   28038 logs.go:286] No container was found matching "kube-proxy"
	I0103 12:59:28.418356   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0103 12:59:28.437709   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.437723   28038 logs.go:286] No container was found matching "kube-controller-manager"
	I0103 12:59:28.437793   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0103 12:59:28.456942   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.456957   28038 logs.go:286] No container was found matching "kindnet"
	I0103 12:59:28.457027   28038 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0103 12:59:28.473952   28038 logs.go:284] 0 containers: []
	W0103 12:59:28.473967   28038 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0103 12:59:28.473974   28038 logs.go:123] Gathering logs for kubelet ...
	I0103 12:59:28.473982   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 12:59:28.509227   28038 logs.go:123] Gathering logs for dmesg ...
	I0103 12:59:28.509242   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 12:59:28.521928   28038 logs.go:123] Gathering logs for describe nodes ...
	I0103 12:59:28.521944   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0103 12:59:28.576892   28038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0103 12:59:28.576904   28038 logs.go:123] Gathering logs for Docker ...
	I0103 12:59:28.576912   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0103 12:59:28.591634   28038 logs.go:123] Gathering logs for container status ...
	I0103 12:59:28.591650   28038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
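After the init failure, minikube inventories containers one component at a time with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, which is why the log reports `0 containers` for every control-plane piece. A sketch of that scan using the same filters (the Go wrapper itself is illustrative):

```go
// For each expected component, list matching container IDs via docker ps.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: docker ps failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		} else {
			fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
		}
	}
}
```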
	W0103 12:59:28.645197   28038 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0103 12:59:28.645223   28038 out.go:239] * 
	W0103 12:59:28.645272   28038 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0103 12:59:28.645290   28038 out.go:239] * 
	W0103 12:59:28.645945   28038 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 12:59:28.707891   28038 out.go:177] 
	W0103 12:59:28.765796   28038 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0103 12:59:28.765856   28038 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0103 12:59:28.765879   28038 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
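The suggested fix targets the `[WARNING IsDockerSystemdCheck]` preflight warning: Docker is using the `cgroupfs` driver while the kubelet may be configured for `systemd` (or vice versa), and the two must agree. A quick way to inspect Docker's side is `docker info --format '{{.CgroupDriver}}'`; the sketch below wraps that call, assuming the template field is available in the local Docker version:

```go
// Report Docker's cgroup driver and flag a potential kubelet mismatch.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	driver := strings.TrimSpace(string(out))
	fmt.Println("Docker cgroup driver:", driver)
	if driver != "systemd" {
		// Matches the remedy in the log: align the kubelet with Docker via
		// --extra-config=kubelet.cgroup-driver=<driver> on `minikube start`.
		fmt.Printf("kubelet must run with --cgroup-driver=%s to match\n", driver)
	}
}
```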
	I0103 12:59:28.786908   28038 out.go:177] 
	I0103 12:59:24.364995   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	I0103 12:59:26.366733   28512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhwbp" in "kube-system" namespace has status "Ready":"False"
	
	
	==> Docker <==
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.494961645Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.530984627Z" level=info msg="Loading containers: done."
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.538868695Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.538925879Z" level=info msg="Daemon has completed initialization"
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.565689331Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.565733584Z" level=info msg="API listen on [::]:2376"
	Jan 03 20:51:13 old-k8s-version-079000 systemd[1]: Started Docker Application Container Engine.
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.768677948Z" level=info msg="Processing signal 'terminated'"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.769687840Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.769897997Z" level=info msg="Daemon shutdown complete"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.770248509Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: docker.service: Deactivated successfully.
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: Starting Docker Application Container Engine...
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:20.823082271Z" level=info msg="Starting up"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:20.858304014Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:20.974500460Z" level=info msg="Loading containers: start."
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.058816401Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.095279502Z" level=info msg="Loading containers: done."
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.103107473Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.103167170Z" level=info msg="Daemon has completed initialization"
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.129628965Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.129721749Z" level=info msg="API listen on [::]:2376"
	Jan 03 20:51:21 old-k8s-version-079000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	time="2024-01-03T20:59:30Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
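The `unable to determine runtime API version` line above is crictl dialing `/var/run/dockershim.sock`, a socket this node never created, after which the fallback `docker ps -a` yields the empty table. A hypothetical pre-check that stats the socket before invoking crictl at all:

```go
// Check whether the dockershim socket exists before attempting crictl.
package main

import (
	"fmt"
	"os"
)

func main() {
	const sock = "/var/run/dockershim.sock"
	info, err := os.Stat(sock)
	if err != nil {
		fmt.Printf("%s: %v (crictl would fail to connect)\n", sock, err)
		return
	}
	fmt.Printf("%s exists (mode %v); crictl can attempt to dial it\n", sock, info.Mode())
}
```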
	
	
	==> describe nodes <==
	
	==> dmesg <==
	[Jan 3 20:28] hrtimer: interrupt took 2402524 ns
	
	
	==> kernel <==
	 20:59:30 up  1:57,  0 users,  load average: 0.36, 0.66, 0.93
	Linux old-k8s-version-079000 6.5.11-linuxkit #1 SMP PREEMPT_DYNAMIC Mon Dec  4 10:03:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Jan 03 20:59:28 old-k8s-version-079000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 03 20:59:29 old-k8s-version-079000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 152.
	Jan 03 20:59:29 old-k8s-version-079000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 03 20:59:29 old-k8s-version-079000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 03 20:59:29 old-k8s-version-079000 kubelet[20061]: I0103 20:59:29.558457   20061 server.go:410] Version: v1.16.0
	Jan 03 20:59:29 old-k8s-version-079000 kubelet[20061]: I0103 20:59:29.558689   20061 plugins.go:100] No cloud provider specified.
	Jan 03 20:59:29 old-k8s-version-079000 kubelet[20061]: I0103 20:59:29.558698   20061 server.go:773] Client rotation is on, will bootstrap in background
	Jan 03 20:59:29 old-k8s-version-079000 kubelet[20061]: I0103 20:59:29.560239   20061 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 03 20:59:29 old-k8s-version-079000 kubelet[20061]: W0103 20:59:29.560834   20061 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 03 20:59:29 old-k8s-version-079000 kubelet[20061]: W0103 20:59:29.560890   20061 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 03 20:59:29 old-k8s-version-079000 kubelet[20061]: F0103 20:59:29.560909   20061 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 03 20:59:29 old-k8s-version-079000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 03 20:59:29 old-k8s-version-079000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 03 20:59:30 old-k8s-version-079000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 153.
	Jan 03 20:59:30 old-k8s-version-079000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 03 20:59:30 old-k8s-version-079000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 03 20:59:30 old-k8s-version-079000 kubelet[20149]: I0103 20:59:30.326302   20149 server.go:410] Version: v1.16.0
	Jan 03 20:59:30 old-k8s-version-079000 kubelet[20149]: I0103 20:59:30.326488   20149 plugins.go:100] No cloud provider specified.
	Jan 03 20:59:30 old-k8s-version-079000 kubelet[20149]: I0103 20:59:30.326497   20149 server.go:773] Client rotation is on, will bootstrap in background
	Jan 03 20:59:30 old-k8s-version-079000 kubelet[20149]: I0103 20:59:30.328137   20149 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 03 20:59:30 old-k8s-version-079000 kubelet[20149]: W0103 20:59:30.328782   20149 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 03 20:59:30 old-k8s-version-079000 kubelet[20149]: W0103 20:59:30.328841   20149 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 03 20:59:30 old-k8s-version-079000 kubelet[20149]: F0103 20:59:30.328862   20149 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 03 20:59:30 old-k8s-version-079000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 03 20:59:30 old-k8s-version-079000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
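The repeating fatal `failed to run Kubelet: mountpoint for cpu not found` is the likely root cause of the whole timeout: kubelet v1.16 looks for a cgroup v1 `cpu` controller mount, and on a cgroup v2 (unified hierarchy) host, which this linuxkit 6.5.11 kernel typically is, no such mount exists, so the service crash-loops (restart counter at 153). A rough sketch of the same lookup over /proc/mounts (the diagnostic wrapper is illustrative):

```go
// Scan /proc/mounts for a cgroup v1 mount carrying the "cpu" controller.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Println("cannot read /proc/mounts:", err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// cgroup v1 entries look like:
		//   cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,...,cpu,cpuacct 0 0
		if len(fields) >= 4 && fields[2] == "cgroup" {
			for _, opt := range strings.Split(fields[3], ",") {
				if opt == "cpu" {
					fmt.Println("cpu cgroup mounted at", fields[1])
					return
				}
			}
		}
	}
	fmt.Println("mountpoint for cpu not found (cgroup v2 host?)")
}
```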
	
	

-- /stdout --
** stderr ** 
	E0103 12:59:30.495102   28635 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-079000 -n old-k8s-version-079000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 2 (382.77009ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-079000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (504.74s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
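The wait above polls the cluster for pods labeled `k8s-app=kubernetes-dashboard`, tolerating transient failures (the `EOF` warnings that follow) while the apiserver is unreachable. A hypothetical standalone equivalent using client-go; the kubeconfig path, retry interval, and output are assumptions, and the real helper lives in minikube's helpers_test.go:

```go
// Poll for dashboard pods by label selector until found or timed out.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(9 * time.Minute) // the test waits 9m0s
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			fmt.Println("pod list returned:", err) // e.g. EOF while apiserver is down
		} else if len(pods.Items) > 0 {
			fmt.Println("found", len(pods.Items), "dashboard pod(s)")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for kubernetes-dashboard pods")
}
```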
E0103 12:59:35.673069   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:59:39.652838   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 12:59:43.174252   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:59:48.517746   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 12:59:58.640079   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:00:16.463125   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:01:49.175265   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:02:17.249461   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 13:02:19.055009   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:02:26.095181   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:02:56.898216   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:03:12.632380   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:03:26.602517   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:03:40.297782   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:03:42.101790   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:03:54.284329   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:04:02.128480   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 13:04:08.928792   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:04:39.658356   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:04:43.182773   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:04:58.647208   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:05:16.470771   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:05:25.185619   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 13:05:31.976792   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:05:59.959289   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:06:02.710922   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 13:06:03.024155   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:06:39.517440   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:06:49.183689   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:07:17.257671   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 13:07:19.062305   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:07:56.903770   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:08:12.639962   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:08:26.609807   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-079000 -n old-k8s-version-079000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 2 (392.95679ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-079000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
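For context, the wait that just timed out is the standard client-go pattern: list pods by label selector under a context deadline and retry until a matching pod appears or the deadline fires. Below is a minimal, illustrative sketch of such a loop (assumptions: client-go is available and a kubeconfig sits at the default path; the names are hypothetical and this is not the actual helpers_test.go implementation):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig in the standard location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 9-minute budget as the failing test above.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err == nil && len(pods.Items) > 0 {
			fmt.Println("found pod:", pods.Items[0].Name)
			return
		}
		// A failed List (here: EOF, because the apiserver is not reachable)
		// is logged as a WARNING and retried until the deadline expires.
		select {
		case <-ctx.Done():
			fmt.Println("gave up:", ctx.Err())
			return
		case <-time.After(5 * time.Second):
		}
	}
}

Against this cluster every List call fails with EOF because the apiserver is stopped, so a loop like this runs out the full 9m0s and surfaces "context deadline exceeded", matching the WARNING stream above.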
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-079000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-079000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3",
	        "Created": "2024-01-03T20:44:55.833825695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 330830,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:51:07.721836081Z",
	            "FinishedAt": "2024-01-03T20:51:04.945962022Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hosts",
	        "LogPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3-json.log",
	        "Name": "/old-k8s-version-079000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-079000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-079000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a-init/diff:/var/lib/docker/overlay2/d51c25870073ca49ae45bcaffff5d04b6853b273710b15cd26d3414e5d7cfab6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-079000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-079000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-079000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "22690e506998a488020031708015bc4c616d9aded4ec18ee021cebb06f55f6c8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61670"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61671"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61672"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61668"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61669"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/22690e506998",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-079000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "488c5550224f",
	                        "old-k8s-version-079000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "fa57a59237dbd216e3611a46ef90c42978dc8b8c11f6ffc7c61970c426e7ce95",
	                    "EndpointID": "b9f1eeb15eb3bcf34443d22df5a9f0f604e4242a88fe4cc278cbd366a5c2f69a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
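Note the port map in the inspect output above: the container's 8443/tcp (the apiserver port) is published on 127.0.0.1:61669, which is exactly the endpoint the failing pod-list requests were hitting. A small illustrative sketch of recovering that mapping, using the same Go-template style the harness itself uses for 22/tcp later in this log (assumptions: docker on PATH and this container name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template shape as the harness's cli_runner call for 22/tcp,
	// swapped to the apiserver's 8443/tcp binding.
	tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", tmpl, "old-k8s-version-079000").Output()
	if err != nil {
		panic(err)
	}
	// With the mapping shown above this prints 61669, the port in the
	// failing https://127.0.0.1:61669/... requests.
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
}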
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 2 (391.110738ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-079000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-079000 logs -n 25: (1.347473143s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-079000        | old-k8s-version-079000       | jenkins | v1.32.0 | 03 Jan 24 12:49 PST |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079000                              | old-k8s-version-079000       | jenkins | v1.32.0 | 03 Jan 24 12:51 PST | 03 Jan 24 12:51 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079000             | old-k8s-version-079000       | jenkins | v1.32.0 | 03 Jan 24 12:51 PST | 03 Jan 24 12:51 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079000                              | old-k8s-version-079000       | jenkins | v1.32.0 | 03 Jan 24 12:51 PST |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| image   | no-preload-742000 image list                           | no-preload-742000            | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:54 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-742000                                   | no-preload-742000            | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:54 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-742000                                   | no-preload-742000            | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:54 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-742000                                   | no-preload-742000            | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:54 PST |
	| delete  | -p no-preload-742000                                   | no-preload-742000            | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:54 PST |
	| start   | -p embed-certs-362000                                  | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 12:54 PST | 03 Jan 24 12:55 PST |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-362000            | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 12:55 PST | 03 Jan 24 12:55 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-362000                                  | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 12:55 PST | 03 Jan 24 12:55 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-362000                 | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 12:55 PST | 03 Jan 24 12:55 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-362000                                  | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 12:55 PST | 03 Jan 24 13:01 PST |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | embed-certs-362000 image list                          | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:01 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-362000                                  | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:01 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-362000                                  | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:01 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-362000                                  | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:01 PST |
	| delete  | -p embed-certs-362000                                  | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:01 PST |
	| delete  | -p                                                     | disable-driver-mounts-174000 | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:01 PST |
	|         | disable-driver-mounts-174000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:02 PST |
	|         | default-k8s-diff-port-213000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-213000  | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:02 PST | 03 Jan 24 13:02 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:02 PST | 03 Jan 24 13:03 PST |
	|         | default-k8s-diff-port-213000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-213000       | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:03 PST | 03 Jan 24 13:03 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:03 PST | 03 Jan 24 13:08 PST |
	|         | default-k8s-diff-port-213000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 13:03:03
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 13:03:03.147576   29024 out.go:296] Setting OutFile to fd 1 ...
	I0103 13:03:03.147763   29024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 13:03:03.147770   29024 out.go:309] Setting ErrFile to fd 2...
	I0103 13:03:03.147774   29024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 13:03:03.147989   29024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 13:03:03.149430   29024 out.go:303] Setting JSON to false
	I0103 13:03:03.171764   29024 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9153,"bootTime":1704306630,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 13:03:03.171862   29024 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 13:03:03.193833   29024 out.go:177] * [default-k8s-diff-port-213000] minikube v1.32.0 on Darwin 14.2
	I0103 13:03:03.236351   29024 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 13:03:03.236376   29024 notify.go:220] Checking for updates...
	I0103 13:03:03.278216   29024 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 13:03:03.299287   29024 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 13:03:03.320224   29024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 13:03:03.341226   29024 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	I0103 13:03:03.362249   29024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 13:03:03.384091   29024 config.go:182] Loaded profile config "default-k8s-diff-port-213000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0103 13:03:03.384955   29024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 13:03:03.441745   29024 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 13:03:03.441912   29024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 13:03:03.541226   29024 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 21:03:03.531358756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 13:03:03.562467   29024 out.go:177] * Using the docker driver based on existing profile
	I0103 13:03:03.604126   29024 start.go:298] selected driver: docker
	I0103 13:03:03.604160   29024 start.go:902] validating driver "docker" against &{Name:default-k8s-diff-port-213000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-213000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 13:03:03.604266   29024 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 13:03:03.608757   29024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 13:03:03.712461   29024 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 21:03:03.702030541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 13:03:03.712685   29024 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 13:03:03.712747   29024 cni.go:84] Creating CNI manager for ""
	I0103 13:03:03.712761   29024 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 13:03:03.712771   29024 start_flags.go:323] config:
	{Name:default-k8s-diff-port-213000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-213000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 13:03:03.754852   29024 out.go:177] * Starting control plane node default-k8s-diff-port-213000 in cluster default-k8s-diff-port-213000
	I0103 13:03:03.775917   29024 cache.go:121] Beginning downloading kic base image for docker with docker
	I0103 13:03:03.798106   29024 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 13:03:03.840044   29024 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0103 13:03:03.840112   29024 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0103 13:03:03.840126   29024 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 13:03:03.840136   29024 cache.go:56] Caching tarball of preloaded images
	I0103 13:03:03.840311   29024 preload.go:174] Found /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0103 13:03:03.840322   29024 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0103 13:03:03.840477   29024 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/config.json ...
	I0103 13:03:03.894190   29024 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 13:03:03.894219   29024 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 13:03:03.894251   29024 cache.go:194] Successfully downloaded all kic artifacts
	I0103 13:03:03.894292   29024 start.go:365] acquiring machines lock for default-k8s-diff-port-213000: {Name:mk84d996ef7ccf3790782079c258d26b84f43baf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 13:03:03.894381   29024 start.go:369] acquired machines lock for "default-k8s-diff-port-213000" in 67.243µs
	I0103 13:03:03.894404   29024 start.go:96] Skipping create...Using existing machine configuration
	I0103 13:03:03.894412   29024 fix.go:54] fixHost starting: 
	I0103 13:03:03.894645   29024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213000 --format={{.State.Status}}
	I0103 13:03:03.946073   29024 fix.go:102] recreateIfNeeded on default-k8s-diff-port-213000: state=Stopped err=<nil>
	W0103 13:03:03.946132   29024 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 13:03:03.967784   29024 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-213000" ...
	I0103 13:03:04.009311   29024 cli_runner.go:164] Run: docker start default-k8s-diff-port-213000
	I0103 13:03:04.264206   29024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213000 --format={{.State.Status}}
	I0103 13:03:04.319814   29024 kic.go:430] container "default-k8s-diff-port-213000" state is running.
	I0103 13:03:04.320420   29024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-213000
	I0103 13:03:04.376091   29024 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/config.json ...
	I0103 13:03:04.376504   29024 machine.go:88] provisioning docker machine ...
	I0103 13:03:04.376527   29024 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-213000"
	I0103 13:03:04.376594   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:04.441439   29024 main.go:141] libmachine: Using SSH client type: native
	I0103 13:03:04.441773   29024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 62171 <nil> <nil>}
	I0103 13:03:04.441786   29024 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-213000 && echo "default-k8s-diff-port-213000" | sudo tee /etc/hostname
	I0103 13:03:04.442831   29024 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0103 13:03:07.574271   29024 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-213000
	
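The provisioner is driving the container over its published SSH port (127.0.0.1:62171 here), and the first dial fails with "handshake failed: EOF" while the container is still booting. A minimal sketch of that run-a-command-over-SSH step with golang.org/x/crypto/ssh; the user, address, and key path are illustrative, not minikube's actual wiring:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Load the machine's private key (path is illustrative).
	keyBytes, err := os.ReadFile("id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Host-key pinning is skipped for the local container.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:62171", cfg)
	if err != nil {
		panic(err) // e.g. "ssh: handshake failed: EOF" while the container boots
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput(`hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}
```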
	I0103 13:03:07.574386   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:07.626432   29024 main.go:141] libmachine: Using SSH client type: native
	I0103 13:03:07.626722   29024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 62171 <nil> <nil>}
	I0103 13:03:07.626737   29024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-213000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-213000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-213000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 13:03:07.744600   29024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 13:03:07.744628   29024 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17885-10646/.minikube CaCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17885-10646/.minikube}
	I0103 13:03:07.744648   29024 ubuntu.go:177] setting up certificates
	I0103 13:03:07.744665   29024 provision.go:83] configureAuth start
	I0103 13:03:07.744742   29024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-213000
	I0103 13:03:07.797929   29024 provision.go:138] copyHostCerts
	I0103 13:03:07.798059   29024 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem, removing ...
	I0103 13:03:07.798070   29024 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem
	I0103 13:03:07.798219   29024 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem (1078 bytes)
	I0103 13:03:07.798460   29024 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem, removing ...
	I0103 13:03:07.798467   29024 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem
	I0103 13:03:07.798549   29024 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem (1123 bytes)
	I0103 13:03:07.798732   29024 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem, removing ...
	I0103 13:03:07.798739   29024 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem
	I0103 13:03:07.798815   29024 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem (1679 bytes)
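copyHostCerts removes any stale destination file before copying the fresh one into place. A stdlib sketch of that replace-then-copy pattern (paths are illustrative):

```go
package main

import (
	"io"
	"os"
)

// copyReplacing mirrors the exec_runner lines above: if dst already exists
// it is removed first, then src is copied into place.
func copyReplacing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := copyReplacing("certs/ca.pem", "ca.pem"); err != nil {
		panic(err)
	}
}
```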
	I0103 13:03:07.798984   29024 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-213000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-213000]
	I0103 13:03:07.909251   29024 provision.go:172] copyRemoteCerts
	I0103 13:03:07.909312   29024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 13:03:07.909368   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:07.960662   29024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62171 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/default-k8s-diff-port-213000/id_rsa Username:docker}
	I0103 13:03:08.050919   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 13:03:08.071189   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0103 13:03:08.091700   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 13:03:08.111936   29024 provision.go:86] duration metric: configureAuth took 367.246847ms
	I0103 13:03:08.111953   29024 ubuntu.go:193] setting minikube options for container-runtime
	I0103 13:03:08.112127   29024 config.go:182] Loaded profile config "default-k8s-diff-port-213000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0103 13:03:08.112198   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:08.179157   29024 main.go:141] libmachine: Using SSH client type: native
	I0103 13:03:08.179458   29024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 62171 <nil> <nil>}
	I0103 13:03:08.179466   29024 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0103 13:03:08.297570   29024 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0103 13:03:08.297601   29024 ubuntu.go:71] root file system type: overlay
	I0103 13:03:08.297703   29024 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0103 13:03:08.297792   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:08.349865   29024 main.go:141] libmachine: Using SSH client type: native
	I0103 13:03:08.350176   29024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 62171 <nil> <nil>}
	I0103 13:03:08.350229   29024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0103 13:03:08.480627   29024 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0103 13:03:08.480753   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:08.535713   29024 main.go:141] libmachine: Using SSH client type: native
	I0103 13:03:08.536035   29024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 62171 <nil> <nil>}
	I0103 13:03:08.536049   29024 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0103 13:03:08.661335   29024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 13:03:08.661355   29024 machine.go:91] provisioned docker machine in 4.284729546s
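The unit update above is a compare-then-swap: write docker.service.new, diff it against the live unit, and only when they differ move the new file into place and restart Docker. A local Go sketch of the same idea (the log performs this remotely over SSH; paths and wiring here are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit installs newContents at path only when it differs from what is
// already there, then reloads systemd and restarts the unit -- the same
// compare-then-swap the log's diff/mv one-liner performs.
func updateUnit(path string, newContents []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContents) {
		return nil // nothing to do, exactly like a clean `diff -u`
	}
	if err := os.WriteFile(path+".new", newContents, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	_ = updateUnit // requires root and systemd to run for real
}
```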
	I0103 13:03:08.661362   29024 start.go:300] post-start starting for "default-k8s-diff-port-213000" (driver="docker")
	I0103 13:03:08.661372   29024 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 13:03:08.661438   29024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 13:03:08.661491   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:08.713362   29024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62171 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/default-k8s-diff-port-213000/id_rsa Username:docker}
	I0103 13:03:08.802263   29024 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 13:03:08.806028   29024 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 13:03:08.806053   29024 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 13:03:08.806060   29024 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 13:03:08.806066   29024 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 13:03:08.806076   29024 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/addons for local assets ...
	I0103 13:03:08.806175   29024 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/files for local assets ...
	I0103 13:03:08.806372   29024 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> 110902.pem in /etc/ssl/certs
	I0103 13:03:08.806588   29024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 13:03:08.814763   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /etc/ssl/certs/110902.pem (1708 bytes)
	I0103 13:03:08.834823   29024 start.go:303] post-start completed in 173.448026ms
	I0103 13:03:08.834910   29024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 13:03:08.834966   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:08.887176   29024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62171 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/default-k8s-diff-port-213000/id_rsa Username:docker}
	I0103 13:03:08.973707   29024 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 13:03:08.978587   29024 fix.go:56] fixHost completed within 5.084043537s
	I0103 13:03:08.978600   29024 start.go:83] releasing machines lock for "default-k8s-diff-port-213000", held for 5.084081068s
	I0103 13:03:08.978680   29024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-213000
	I0103 13:03:09.030158   29024 ssh_runner.go:195] Run: cat /version.json
	I0103 13:03:09.030170   29024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 13:03:09.030251   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:09.030254   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:09.083644   29024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62171 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/default-k8s-diff-port-213000/id_rsa Username:docker}
	I0103 13:03:09.083787   29024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62171 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/default-k8s-diff-port-213000/id_rsa Username:docker}
	I0103 13:03:09.274709   29024 ssh_runner.go:195] Run: systemctl --version
	I0103 13:03:09.280010   29024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 13:03:09.285630   29024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0103 13:03:09.302573   29024 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0103 13:03:09.302642   29024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 13:03:09.311299   29024 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0103 13:03:09.311313   29024 start.go:475] detecting cgroup driver to use...
	I0103 13:03:09.311331   29024 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 13:03:09.311446   29024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 13:03:09.326756   29024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0103 13:03:09.336226   29024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0103 13:03:09.345579   29024 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0103 13:03:09.345640   29024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0103 13:03:09.355187   29024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 13:03:09.364835   29024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0103 13:03:09.374506   29024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 13:03:09.383935   29024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 13:03:09.392893   29024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0103 13:03:09.402371   29024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 13:03:09.410499   29024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 13:03:09.418712   29024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 13:03:09.471466   29024 ssh_runner.go:195] Run: sudo systemctl restart containerd
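The sed calls above rewrite containerd's config so its cgroup handling matches the detected "cgroupfs" driver. A sketch of the SystemdCgroup edit done with Go's regexp instead of sed (the path is an illustrative stand-in for /etc/containerd/config.toml):

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "config.toml" // illustrative stand-in for /etc/containerd/config.toml
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Flip SystemdCgroup to false wherever it appears, preserving indentation,
	// just like the log's `sed -i -r 's|^( *)SystemdCgroup = .*$|...|g'`.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, patched, 0o644); err != nil {
		panic(err)
	}
}
```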
	I0103 13:03:09.557338   29024 start.go:475] detecting cgroup driver to use...
	I0103 13:03:09.557358   29024 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 13:03:09.557422   29024 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0103 13:03:09.579464   29024 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0103 13:03:09.579533   29024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0103 13:03:09.591164   29024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 13:03:09.607350   29024 ssh_runner.go:195] Run: which cri-dockerd
	I0103 13:03:09.611952   29024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0103 13:03:09.621853   29024 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0103 13:03:09.648602   29024 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0103 13:03:09.777718   29024 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0103 13:03:09.867577   29024 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0103 13:03:09.867684   29024 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0103 13:03:09.883759   29024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 13:03:09.974739   29024 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 13:03:10.240353   29024 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0103 13:03:10.298442   29024 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0103 13:03:10.354502   29024 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0103 13:03:10.407480   29024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 13:03:10.460554   29024 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0103 13:03:10.487863   29024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 13:03:10.541880   29024 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0103 13:03:10.618210   29024 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0103 13:03:10.618303   29024 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0103 13:03:10.622908   29024 start.go:543] Will wait 60s for crictl version
	I0103 13:03:10.622976   29024 ssh_runner.go:195] Run: which crictl
	I0103 13:03:10.626954   29024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 13:03:10.678206   29024 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
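Before querying crictl, the log waits up to 60s for /var/run/cri-dockerd.sock to appear. A minimal sketch of that stat-until-deadline wait:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path such as /var/run/cri-dockerd.sock,
// matching the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```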
	I0103 13:03:10.678289   29024 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 13:03:10.703618   29024 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 13:03:10.753178   29024 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0103 13:03:10.753283   29024 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-213000 dig +short host.docker.internal
	I0103 13:03:10.870097   29024 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0103 13:03:10.870202   29024 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0103 13:03:10.875270   29024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
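The bash pipeline above makes the host.minikube.internal mapping idempotent: strip any existing line for the host, append the fresh one, then copy the result back over /etc/hosts. A Go sketch of the same ensure-entry logic against an illustrative local file:

```go
package main

import (
	"os"
	"strings"
)

// ensureHostsEntry reproduces the log's grep -v / echo / cp pipeline: drop
// any existing line for the host, append the fresh mapping, rewrite the file.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Illustrative local file; the log edits /etc/hosts inside the container.
	if err := ensureHostsEntry("hosts", "192.168.65.254", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
```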
	I0103 13:03:10.885893   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:10.937957   29024 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0103 13:03:10.938037   29024 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 13:03:10.958573   29024 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0103 13:03:10.958599   29024 docker.go:601] Images already preloaded, skipping extraction
	I0103 13:03:10.958674   29024 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 13:03:10.979846   29024 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0103 13:03:10.979862   29024 cache_images.go:84] Images are preloaded, skipping loading
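The "Images are preloaded, skipping loading" decision amounts to checking that every image the preload tarball provides already shows up in `docker images`. A toy sketch of that set-membership check, with a few hard-coded names standing in for both lists:

```go
package main

import "fmt"

func main() {
	// Images the preload is expected to provide (abbreviated).
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/pause:3.9",
	}
	// Images reported by `docker images --format {{.Repository}}:{{.Tag}}`.
	got := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.28.4": true,
		"registry.k8s.io/etcd:3.5.9-0":           true,
		"registry.k8s.io/pause:3.9":              true,
	}
	missing := 0
	for _, img := range want {
		if !got[img] {
			fmt.Println("missing:", img)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("images are preloaded, skipping loading")
	}
}
```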
	I0103 13:03:10.979950   29024 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0103 13:03:11.027962   29024 cni.go:84] Creating CNI manager for ""
	I0103 13:03:11.027981   29024 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 13:03:11.027996   29024 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 13:03:11.028021   29024 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-213000 NodeName:default-k8s-diff-port-213000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 13:03:11.028182   29024 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-213000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
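The kubeadm config above is rendered from the typed options logged at kubeadm.go:176. A pared-down sketch of that render step using text/template (minikube's real template is much larger; the field and template names here are illustrative):

```go
package main

import (
	"os"
	"text/template"
)

// kubeadmTmpl models a small slice of the YAML shown in the log.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.67.2",
		APIServerPort:    8444,
		CRISocket:        "unix:///var/run/cri-dockerd.sock",
		NodeName:         "default-k8s-diff-port-213000",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```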
	I0103 13:03:11.028266   29024 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-213000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-213000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0103 13:03:11.028332   29024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 13:03:11.036808   29024 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 13:03:11.036879   29024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 13:03:11.045125   29024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0103 13:03:11.060337   29024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 13:03:11.076196   29024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
	I0103 13:03:11.091854   29024 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0103 13:03:11.095992   29024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 13:03:11.106360   29024 certs.go:56] Setting up /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000 for IP: 192.168.67.2
	I0103 13:03:11.106381   29024 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a30c05f18415c794a1ae2617714fd3a6ba516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 13:03:11.106562   29024 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key
	I0103 13:03:11.106631   29024 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key
	I0103 13:03:11.106711   29024 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.key
	I0103 13:03:11.106792   29024 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/apiserver.key.c7fa3a9e
	I0103 13:03:11.106862   29024 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/proxy-client.key
	I0103 13:03:11.107079   29024 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem (1338 bytes)
	W0103 13:03:11.107124   29024 certs.go:433] ignoring /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090_empty.pem, impossibly tiny 0 bytes
	I0103 13:03:11.107146   29024 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 13:03:11.107198   29024 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem (1078 bytes)
	I0103 13:03:11.107233   29024 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem (1123 bytes)
	I0103 13:03:11.107262   29024 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem (1679 bytes)
	I0103 13:03:11.107331   29024 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem (1708 bytes)
	I0103 13:03:11.107925   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 13:03:11.128493   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 13:03:11.148992   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 13:03:11.169314   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 13:03:11.189634   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 13:03:11.210251   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 13:03:11.231061   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 13:03:11.251349   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 13:03:11.271582   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /usr/share/ca-certificates/110902.pem (1708 bytes)
	I0103 13:03:11.291698   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 13:03:11.312165   29024 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem --> /usr/share/ca-certificates/11090.pem (1338 bytes)
	I0103 13:03:11.332892   29024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 13:03:11.348630   29024 ssh_runner.go:195] Run: openssl version
	I0103 13:03:11.354276   29024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110902.pem && ln -fs /usr/share/ca-certificates/110902.pem /etc/ssl/certs/110902.pem"
	I0103 13:03:11.363145   29024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110902.pem
	I0103 13:03:11.367122   29024 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:57 /usr/share/ca-certificates/110902.pem
	I0103 13:03:11.367173   29024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110902.pem
	I0103 13:03:11.373440   29024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110902.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 13:03:11.381659   29024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 13:03:11.390564   29024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 13:03:11.394669   29024 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I0103 13:03:11.394710   29024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 13:03:11.401152   29024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 13:03:11.409610   29024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11090.pem && ln -fs /usr/share/ca-certificates/11090.pem /etc/ssl/certs/11090.pem"
	I0103 13:03:11.418413   29024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11090.pem
	I0103 13:03:11.422386   29024 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:57 /usr/share/ca-certificates/11090.pem
	I0103 13:03:11.422435   29024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11090.pem
	I0103 13:03:11.428716   29024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11090.pem /etc/ssl/certs/51391683.0"
	I0103 13:03:11.436966   29024 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 13:03:11.440950   29024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 13:03:11.447160   29024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 13:03:11.453462   29024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 13:03:11.460086   29024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 13:03:11.466434   29024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 13:03:11.472698   29024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
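Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same question answered with Go's crypto/x509 (the cert path is illustrative):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within window,
// mirroring `openssl x509 -noout -in <path> -checkend 86400`.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("apiserver.crt", 24*time.Hour) // illustrative path
	fmt.Println("expiring within 24h:", expiring, "err:", err)
}
```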
	I0103 13:03:11.479647   29024 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-213000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-213000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 13:03:11.479763   29024 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 13:03:11.499559   29024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 13:03:11.508632   29024 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 13:03:11.508650   29024 kubeadm.go:636] restartCluster start
	I0103 13:03:11.508713   29024 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 13:03:11.518234   29024 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:11.518315   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:03:11.571867   29024 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-213000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 13:03:11.572059   29024 kubeconfig.go:146] "default-k8s-diff-port-213000" context is missing from /Users/jenkins/minikube-integration/17885-10646/kubeconfig - will repair!
	I0103 13:03:11.572416   29024 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/kubeconfig: {Name:mk61966fd03b327572b428e807810fbe63a7e94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 13:03:11.573966   29024 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 13:03:11.582850   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:11.582905   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:11.592358   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:12.083117   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:12.083301   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:12.095033   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:12.584282   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:12.584554   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:12.595395   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:13.083538   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:13.083696   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:13.094908   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:13.584374   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:13.584542   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:13.596143   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:14.084253   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:14.084466   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:14.095944   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:14.583246   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:14.583367   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:14.594517   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:15.083013   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:15.083142   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:15.093277   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:15.583746   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:15.583860   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:15.595214   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:16.084658   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:16.084797   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:16.096676   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:16.583658   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:16.583778   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:16.595108   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:17.083186   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:17.083268   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:17.093716   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:17.583261   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:17.583348   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:17.593006   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:18.085145   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:18.085293   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:18.097498   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:18.584081   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:18.584226   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:18.595664   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:19.083336   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:19.083431   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:19.093610   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:19.583146   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:19.583267   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:19.593696   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:20.083798   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:20.083898   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:20.094404   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:20.583555   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:20.583614   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:20.593113   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:21.084610   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:21.084793   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:21.095414   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:21.583511   29024 api_server.go:166] Checking apiserver status ...
	I0103 13:03:21.583621   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:03:21.594901   29024 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:21.594916   29024 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
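The twenty pgrep attempts above are a retry-with-deadline loop that gives up with "context deadline exceeded" after roughly ten seconds of half-second polls. A minimal sketch of that shape, with a hypothetical check function standing in for the pgrep call:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// pollUntil retries check every interval until it succeeds or ctx expires --
// the shape of the pgrep loop that ends in "context deadline exceeded".
func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // context.DeadlineExceeded, as in the log
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	err := pollUntil(ctx, 500*time.Millisecond, func() error {
		return fmt.Errorf("apiserver pid not found") // hypothetical check
	})
	fmt.Println(err)
}
```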
	I0103 13:03:21.594932   29024 kubeadm.go:1135] stopping kube-system containers ...
	I0103 13:03:21.595012   29024 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 13:03:21.613970   29024 docker.go:469] Stopping containers: [acc848903d93 f7e0145eca47 472a03a1fd41 77229cc4fdf3 ae157a5b1093 db4e435d7fcb 800f26d6a364 d9f4fa5000e5 cef393f718f6 5f95e8e27d96 8f3eb0bab57b 8dfd6b3b1f04 bae4f81e3e01 e6b9aa5f521a ebe1cbfefd1c]
	I0103 13:03:21.614062   29024 ssh_runner.go:195] Run: docker stop acc848903d93 f7e0145eca47 472a03a1fd41 77229cc4fdf3 ae157a5b1093 db4e435d7fcb 800f26d6a364 d9f4fa5000e5 cef393f718f6 5f95e8e27d96 8f3eb0bab57b 8dfd6b3b1f04 bae4f81e3e01 e6b9aa5f521a ebe1cbfefd1c
	I0103 13:03:21.634804   29024 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 13:03:21.646425   29024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 13:03:21.654968   29024 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  3 21:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  3 21:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Jan  3 21:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  3 21:01 /etc/kubernetes/scheduler.conf
	
	I0103 13:03:21.655027   29024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0103 13:03:21.663313   29024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0103 13:03:21.671418   29024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0103 13:03:21.679453   29024 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:21.679511   29024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0103 13:03:21.687640   29024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0103 13:03:21.695876   29024 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:03:21.695958   29024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0103 13:03:21.704049   29024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 13:03:21.712855   29024 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 13:03:21.712872   29024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:03:21.759304   29024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:03:22.324202   29024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:03:22.446081   29024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:03:22.496992   29024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:03:22.573860   29024 api_server.go:52] waiting for apiserver process to appear ...
	I0103 13:03:22.573952   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 13:03:23.076055   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 13:03:23.575423   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 13:03:23.656213   29024 api_server.go:72] duration metric: took 1.082323565s to wait for apiserver process to appear ...
	I0103 13:03:23.656231   29024 api_server.go:88] waiting for apiserver healthz status ...
	I0103 13:03:23.656276   29024 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62175/healthz ...
	I0103 13:03:23.657778   29024 api_server.go:269] stopped: https://127.0.0.1:62175/healthz: Get "https://127.0.0.1:62175/healthz": EOF
	I0103 13:03:24.156623   29024 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62175/healthz ...
	I0103 13:03:25.751566   29024 api_server.go:279] https://127.0.0.1:62175/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 13:03:25.751591   29024 api_server.go:103] status: https://127.0.0.1:62175/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 13:03:25.751604   29024 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62175/healthz ...
	I0103 13:03:25.858410   29024 api_server.go:279] https://127.0.0.1:62175/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 13:03:25.858445   29024 api_server.go:103] status: https://127.0.0.1:62175/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 13:03:26.156551   29024 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62175/healthz ...
	I0103 13:03:26.163503   29024 api_server.go:279] https://127.0.0.1:62175/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 13:03:26.163520   29024 api_server.go:103] status: https://127.0.0.1:62175/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 13:03:26.656473   29024 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62175/healthz ...
	I0103 13:03:26.664076   29024 api_server.go:279] https://127.0.0.1:62175/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 13:03:26.664101   29024 api_server.go:103] status: https://127.0.0.1:62175/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 13:03:27.156545   29024 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62175/healthz ...
	I0103 13:03:27.164266   29024 api_server.go:279] https://127.0.0.1:62175/healthz returned 200:
	ok
	I0103 13:03:27.175470   29024 api_server.go:141] control plane version: v1.28.4
	I0103 13:03:27.175495   29024 api_server.go:131] duration metric: took 3.519165888s to wait for apiserver health ...
	I0103 13:03:27.175506   29024 cni.go:84] Creating CNI manager for ""
	I0103 13:03:27.175526   29024 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 13:03:27.198538   29024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 13:03:27.222805   29024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 13:03:27.262474   29024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 13:03:27.362628   29024 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 13:03:27.375141   29024 system_pods.go:59] 8 kube-system pods found
	I0103 13:03:27.375171   29024 system_pods.go:61] "coredns-5dd5756b68-9hx9z" [21833745-3224-4ab8-9c63-3b52fe0658a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 13:03:27.375181   29024 system_pods.go:61] "etcd-default-k8s-diff-port-213000" [7e2ad463-eff3-498a-ac56-e432f5b91595] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 13:03:27.375191   29024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-213000" [8da54ceb-8834-45ba-86e1-d59849993121] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 13:03:27.375208   29024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-213000" [de553f0f-0d64-4c3a-b01e-2967f70815be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 13:03:27.375218   29024 system_pods.go:61] "kube-proxy-cn7zl" [acdf16d0-e807-4a30-a5c1-4e6f6110c710] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 13:03:27.375226   29024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-213000" [a7185d8f-f3c0-4eff-883c-7d64a5fe40c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 13:03:27.375239   29024 system_pods.go:61] "metrics-server-57f55c9bc5-kd8g7" [7ab88e45-b27f-4270-8c38-31ea83b0ebf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 13:03:27.375258   29024 system_pods.go:61] "storage-provisioner" [362157b9-5ce5-4e5f-96b9-8056977fa040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 13:03:27.375267   29024 system_pods.go:74] duration metric: took 12.620464ms to wait for pod list to return data ...
	I0103 13:03:27.375282   29024 node_conditions.go:102] verifying NodePressure condition ...
	I0103 13:03:27.453568   29024 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0103 13:03:27.453591   29024 node_conditions.go:123] node cpu capacity is 12
	I0103 13:03:27.453605   29024 node_conditions.go:105] duration metric: took 78.313509ms to run NodePressure ...
	I0103 13:03:27.453627   29024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:03:28.071879   29024 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 13:03:28.078207   29024 kubeadm.go:787] kubelet initialised
	I0103 13:03:28.078221   29024 kubeadm.go:788] duration metric: took 6.325972ms waiting for restarted kubelet to initialise ...
	I0103 13:03:28.078228   29024 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 13:03:28.084885   29024 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9hx9z" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:30.091528   29024 pod_ready.go:92] pod "coredns-5dd5756b68-9hx9z" in "kube-system" namespace has status "Ready":"True"
	I0103 13:03:30.091542   29024 pod_ready.go:81] duration metric: took 2.0065879s waiting for pod "coredns-5dd5756b68-9hx9z" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:30.091548   29024 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:32.100124   29024 pod_ready.go:102] pod "etcd-default-k8s-diff-port-213000" in "kube-system" namespace has status "Ready":"False"
	I0103 13:03:34.601256   29024 pod_ready.go:102] pod "etcd-default-k8s-diff-port-213000" in "kube-system" namespace has status "Ready":"False"
	I0103 13:03:37.099752   29024 pod_ready.go:102] pod "etcd-default-k8s-diff-port-213000" in "kube-system" namespace has status "Ready":"False"
	I0103 13:03:39.098395   29024 pod_ready.go:92] pod "etcd-default-k8s-diff-port-213000" in "kube-system" namespace has status "Ready":"True"
	I0103 13:03:39.098407   29024 pod_ready.go:81] duration metric: took 9.006615987s waiting for pod "etcd-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:39.098415   29024 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:40.606845   29024 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-213000" in "kube-system" namespace has status "Ready":"True"
	I0103 13:03:40.606858   29024 pod_ready.go:81] duration metric: took 1.508399156s waiting for pod "kube-apiserver-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:40.606865   29024 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:40.612009   29024 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-213000" in "kube-system" namespace has status "Ready":"True"
	I0103 13:03:40.612020   29024 pod_ready.go:81] duration metric: took 5.149269ms waiting for pod "kube-controller-manager-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:40.612026   29024 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cn7zl" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:40.616920   29024 pod_ready.go:92] pod "kube-proxy-cn7zl" in "kube-system" namespace has status "Ready":"True"
	I0103 13:03:40.616930   29024 pod_ready.go:81] duration metric: took 4.899186ms waiting for pod "kube-proxy-cn7zl" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:40.616936   29024 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:42.624436   29024 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-213000" in "kube-system" namespace has status "Ready":"True"
	I0103 13:03:42.624449   29024 pod_ready.go:81] duration metric: took 2.007455658s waiting for pod "kube-scheduler-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:42.624457   29024 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace to be "Ready" ...
	I0103 13:03:44.632211   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:03:47.130852   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:03:49.131476   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:03:51.632327   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:03:54.132078   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:03:56.133225   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:03:58.632794   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:00.633880   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:03.158128   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:05.630759   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:07.632536   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:10.134178   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:12.633129   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:15.133732   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:17.634553   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:20.134039   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:22.632524   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:25.132044   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:27.132354   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:29.634012   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:31.635302   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:34.133756   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:36.634293   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:39.132458   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:41.634549   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:44.135191   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:46.634419   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:49.135468   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:51.635392   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:53.659495   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:56.134055   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:04:58.134343   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:00.134887   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:02.633997   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:04.634530   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:07.133746   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:09.134238   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:11.135683   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:13.634896   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:16.135434   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:18.713074   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:21.133627   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:23.133732   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:25.137463   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:27.634184   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:30.135608   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:32.633768   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:34.634632   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:36.636652   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:39.134642   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:41.136101   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:43.634900   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:45.636929   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:48.135105   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:50.636695   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:53.135488   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:55.635415   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:57.635589   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:05:59.637017   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:02.136848   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:04.137493   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:06.635941   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:08.637744   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:11.137707   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:13.636182   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:16.137205   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:18.637941   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:21.136145   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:23.637309   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:26.135446   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:28.136484   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:30.636279   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:32.638587   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:34.663523   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:37.136034   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:39.636774   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:42.136893   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:44.635774   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:47.138164   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:49.637743   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:52.137479   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:54.636198   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:56.636417   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:06:58.637810   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:01.138650   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:03.636561   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:05.636887   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:07.637607   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:10.138751   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:12.638995   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:15.139006   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:17.638106   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:19.638250   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:21.638541   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:23.639571   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:26.139755   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:28.637360   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:30.638587   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:32.639765   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:35.137827   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:37.138020   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:39.138531   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:41.636307   29024 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace has status "Ready":"False"
	I0103 13:07:42.630798   29024 pod_ready.go:81] duration metric: took 4m0.000154292s waiting for pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace to be "Ready" ...
	E0103 13:07:42.630837   29024 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-kd8g7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0103 13:07:42.630863   29024 pod_ready.go:38] duration metric: took 4m14.546073007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 13:07:42.630897   29024 kubeadm.go:640] restartCluster took 4m31.115258056s
	W0103 13:07:42.630950   29024 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0103 13:07:42.630972   29024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0103 13:07:49.410265   29024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (6.779099839s)
	I0103 13:07:49.410325   29024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 13:07:49.420935   29024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 13:07:49.429443   29024 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 13:07:49.429493   29024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 13:07:49.437677   29024 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 13:07:49.437706   29024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 13:07:49.478459   29024 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0103 13:07:49.478507   29024 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 13:07:49.598882   29024 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 13:07:49.598971   29024 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 13:07:49.599047   29024 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 13:07:49.882583   29024 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 13:07:49.908232   29024 out.go:204]   - Generating certificates and keys ...
	I0103 13:07:49.908294   29024 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 13:07:49.908363   29024 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 13:07:49.908428   29024 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0103 13:07:49.908483   29024 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0103 13:07:49.908549   29024 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0103 13:07:49.908597   29024 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0103 13:07:49.908665   29024 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0103 13:07:49.908719   29024 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0103 13:07:49.908783   29024 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0103 13:07:49.908843   29024 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0103 13:07:49.908874   29024 kubeadm.go:322] [certs] Using the existing "sa" key
	I0103 13:07:49.908920   29024 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 13:07:50.027027   29024 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 13:07:50.203653   29024 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 13:07:50.377668   29024 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 13:07:50.524660   29024 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 13:07:50.524966   29024 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 13:07:50.526470   29024 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 13:07:50.547935   29024 out.go:204]   - Booting up control plane ...
	I0103 13:07:50.548000   29024 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 13:07:50.548053   29024 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 13:07:50.548106   29024 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 13:07:50.548186   29024 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 13:07:50.548306   29024 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 13:07:50.548347   29024 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 13:07:50.608080   29024 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 13:07:55.611391   29024 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.003055 seconds
	I0103 13:07:55.611577   29024 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 13:07:55.621291   29024 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 13:07:56.136847   29024 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 13:07:56.137005   29024 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-213000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 13:07:56.646161   29024 kubeadm.go:322] [bootstrap-token] Using token: lp6ny7.t0ggokj2emugzi6c
	I0103 13:07:56.684975   29024 out.go:204]   - Configuring RBAC rules ...
	I0103 13:07:56.685073   29024 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 13:07:56.689431   29024 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 13:07:56.730345   29024 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 13:07:56.733662   29024 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 13:07:56.736566   29024 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 13:07:56.739340   29024 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 13:07:56.748455   29024 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 13:07:56.876719   29024 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 13:07:57.159908   29024 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 13:07:57.161065   29024 kubeadm.go:322] 
	I0103 13:07:57.161142   29024 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 13:07:57.161164   29024 kubeadm.go:322] 
	I0103 13:07:57.161354   29024 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 13:07:57.161375   29024 kubeadm.go:322] 
	I0103 13:07:57.161418   29024 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 13:07:57.161494   29024 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 13:07:57.161563   29024 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 13:07:57.161573   29024 kubeadm.go:322] 
	I0103 13:07:57.161655   29024 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0103 13:07:57.161673   29024 kubeadm.go:322] 
	I0103 13:07:57.161734   29024 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 13:07:57.161749   29024 kubeadm.go:322] 
	I0103 13:07:57.161832   29024 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 13:07:57.161946   29024 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 13:07:57.162065   29024 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 13:07:57.162079   29024 kubeadm.go:322] 
	I0103 13:07:57.162178   29024 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 13:07:57.162300   29024 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 13:07:57.162315   29024 kubeadm.go:322] 
	I0103 13:07:57.162446   29024 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token lp6ny7.t0ggokj2emugzi6c \
	I0103 13:07:57.162606   29024 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:77ff46b7fd6ee56bcdeaaed8388fa545a7b87f928fa39b7d2cc5c40f4d10849b \
	I0103 13:07:57.162645   29024 kubeadm.go:322] 	--control-plane 
	I0103 13:07:57.162671   29024 kubeadm.go:322] 
	I0103 13:07:57.162799   29024 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 13:07:57.162813   29024 kubeadm.go:322] 
	I0103 13:07:57.162935   29024 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token lp6ny7.t0ggokj2emugzi6c \
	I0103 13:07:57.163094   29024 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:77ff46b7fd6ee56bcdeaaed8388fa545a7b87f928fa39b7d2cc5c40f4d10849b 
	I0103 13:07:57.165110   29024 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0103 13:07:57.165221   29024 kubeadm.go:322] 	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	I0103 13:07:57.165444   29024 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 13:07:57.165464   29024 cni.go:84] Creating CNI manager for ""
	I0103 13:07:57.165478   29024 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 13:07:57.204290   29024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 13:07:57.246612   29024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 13:07:57.266522   29024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 13:07:57.285646   29024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 13:07:57.285723   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:07:57.285725   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=default-k8s-diff-port-213000 minikube.k8s.io/updated_at=2024_01_03T13_07_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:07:57.294517   29024 ops.go:34] apiserver oom_adj: -16
	I0103 13:07:57.409655   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:07:57.909779   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:07:58.409751   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:07:58.911861   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:07:59.410219   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:07:59.910986   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:00.410278   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:00.910579   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:01.410042   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:01.910836   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:02.410667   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:02.911068   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:03.410457   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:03.910058   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:04.410023   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:04.910000   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:05.411229   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:05.910479   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:06.410933   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:06.910004   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:07.410790   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:07.910098   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:08.410738   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:08.911019   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:09.410247   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:09.911462   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:10.410961   29024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 13:08:10.591869   29024 kubeadm.go:1088] duration metric: took 13.305874032s to wait for elevateKubeSystemPrivileges.
	I0103 13:08:10.591888   29024 kubeadm.go:406] StartCluster complete in 4m59.104549815s
	I0103 13:08:10.591915   29024 settings.go:142] acquiring lock: {Name:mk777823310df39752595be0f41f425a2c8eb047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 13:08:10.592001   29024 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 13:08:10.592535   29024 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/kubeconfig: {Name:mk61966fd03b327572b428e807810fbe63a7e94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 13:08:10.592849   29024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 13:08:10.592873   29024 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 13:08:10.592935   29024 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-213000"
	I0103 13:08:10.592954   29024 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-213000"
	I0103 13:08:10.592953   29024 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-213000"
	I0103 13:08:10.592970   29024 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-213000"
	I0103 13:08:10.592973   29024 addons.go:237] Setting addon dashboard=true in "default-k8s-diff-port-213000"
	W0103 13:08:10.592978   29024 addons.go:246] addon metrics-server should already be in state true
	I0103 13:08:10.592980   29024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-213000"
	I0103 13:08:10.592941   29024 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-213000"
	I0103 13:08:10.592998   29024 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-213000"
	W0103 13:08:10.593005   29024 addons.go:246] addon storage-provisioner should already be in state true
	W0103 13:08:10.592983   29024 addons.go:246] addon dashboard should already be in state true
	I0103 13:08:10.593016   29024 config.go:182] Loaded profile config "default-k8s-diff-port-213000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0103 13:08:10.593025   29024 host.go:66] Checking if "default-k8s-diff-port-213000" exists ...
	I0103 13:08:10.593037   29024 host.go:66] Checking if "default-k8s-diff-port-213000" exists ...
	I0103 13:08:10.593044   29024 host.go:66] Checking if "default-k8s-diff-port-213000" exists ...
	I0103 13:08:10.593315   29024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213000 --format={{.State.Status}}
	I0103 13:08:10.593473   29024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213000 --format={{.State.Status}}
	I0103 13:08:10.593476   29024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213000 --format={{.State.Status}}
	I0103 13:08:10.593504   29024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213000 --format={{.State.Status}}
	I0103 13:08:10.669376   29024 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-213000"
	W0103 13:08:10.669407   29024 addons.go:246] addon default-storageclass should already be in state true
	I0103 13:08:10.669432   29024 host.go:66] Checking if "default-k8s-diff-port-213000" exists ...
	I0103 13:08:10.670084   29024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-213000 --format={{.State.Status}}
	I0103 13:08:10.715321   29024 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 13:08:10.752852   29024 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 13:08:10.774518   29024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 13:08:10.811370   29024 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0103 13:08:10.811425   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 13:08:10.848599   29024 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 13:08:10.850303   29024 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 13:08:10.886306   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 13:08:10.886330   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 13:08:10.923588   29024 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0103 13:08:10.886429   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:08:10.886446   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:08:10.886457   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:08:10.945437   29024 addons.go:429] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0103 13:08:10.945449   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0103 13:08:10.945515   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:08:10.955424   29024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0103 13:08:11.030609   29024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62171 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/default-k8s-diff-port-213000/id_rsa Username:docker}
	I0103 13:08:11.030624   29024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62171 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/default-k8s-diff-port-213000/id_rsa Username:docker}
	I0103 13:08:11.030612   29024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62171 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/default-k8s-diff-port-213000/id_rsa Username:docker}
	I0103 13:08:11.030649   29024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62171 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/default-k8s-diff-port-213000/id_rsa Username:docker}
	I0103 13:08:11.160620   29024 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-213000" context rescaled to 1 replicas
	I0103 13:08:11.160661   29024 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0103 13:08:11.183394   29024 out.go:177] * Verifying Kubernetes components...
	I0103 13:08:11.224285   29024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 13:08:11.359589   29024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 13:08:11.363835   29024 addons.go:429] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0103 13:08:11.363853   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0103 13:08:11.364553   29024 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 13:08:11.364564   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 13:08:11.372537   29024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 13:08:11.464321   29024 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 13:08:11.464341   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 13:08:11.468948   29024 addons.go:429] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0103 13:08:11.468964   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0103 13:08:11.567252   29024 addons.go:429] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0103 13:08:11.567267   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0103 13:08:11.567774   29024 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 13:08:11.567786   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 13:08:11.666836   29024 addons.go:429] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0103 13:08:11.666853   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0103 13:08:11.669350   29024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 13:08:11.773508   29024 addons.go:429] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0103 13:08:11.773530   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0103 13:08:11.863013   29024 addons.go:429] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0103 13:08:11.863031   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0103 13:08:11.975753   29024 addons.go:429] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0103 13:08:11.975775   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0103 13:08:12.073100   29024 addons.go:429] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0103 13:08:12.073125   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0103 13:08:12.094431   29024 addons.go:429] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0103 13:08:12.094450   29024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0103 13:08:12.175710   29024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0103 13:08:12.863731   29024 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.908214264s)
	I0103 13:08:12.863765   29024 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0103 13:08:12.863812   29024 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.639415564s)
	I0103 13:08:12.863832   29024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.504180613s)
	I0103 13:08:12.863961   29024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-213000
	I0103 13:08:12.922757   29024 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-213000" to be "Ready" ...
	I0103 13:08:12.961185   29024 node_ready.go:49] node "default-k8s-diff-port-213000" has status "Ready":"True"
	I0103 13:08:12.961214   29024 node_ready.go:38] duration metric: took 38.412985ms waiting for node "default-k8s-diff-port-213000" to be "Ready" ...
	I0103 13:08:12.961239   29024 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 13:08:12.968863   29024 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zsk25" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:12.976179   29024 pod_ready.go:92] pod "coredns-5dd5756b68-zsk25" in "kube-system" namespace has status "Ready":"True"
	I0103 13:08:12.976193   29024 pod_ready.go:81] duration metric: took 7.306508ms waiting for pod "coredns-5dd5756b68-zsk25" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:12.976201   29024 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:13.060714   29024 pod_ready.go:92] pod "etcd-default-k8s-diff-port-213000" in "kube-system" namespace has status "Ready":"True"
	I0103 13:08:13.060730   29024 pod_ready.go:81] duration metric: took 84.52155ms waiting for pod "etcd-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:13.060741   29024 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:13.068552   29024 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-213000" in "kube-system" namespace has status "Ready":"True"
	I0103 13:08:13.068576   29024 pod_ready.go:81] duration metric: took 7.824094ms waiting for pod "kube-apiserver-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:13.068588   29024 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:13.076344   29024 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-213000" in "kube-system" namespace has status "Ready":"True"
	I0103 13:08:13.076360   29024 pod_ready.go:81] duration metric: took 7.76105ms waiting for pod "kube-controller-manager-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:13.076368   29024 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xhbnc" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:13.363055   29024 pod_ready.go:92] pod "kube-proxy-xhbnc" in "kube-system" namespace has status "Ready":"True"
	I0103 13:08:13.363079   29024 pod_ready.go:81] duration metric: took 286.698142ms waiting for pod "kube-proxy-xhbnc" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:13.363091   29024 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:13.465171   29024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.092541327s)
	I0103 13:08:13.573300   29024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.90385713s)
	I0103 13:08:13.573334   29024 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-213000"
	I0103 13:08:13.760119   29024 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-213000" in "kube-system" namespace has status "Ready":"True"
	I0103 13:08:13.760135   29024 pod_ready.go:81] duration metric: took 397.026952ms waiting for pod "kube-scheduler-default-k8s-diff-port-213000" in "kube-system" namespace to be "Ready" ...
	I0103 13:08:13.760153   29024 pod_ready.go:38] duration metric: took 798.874456ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 13:08:13.760176   29024 api_server.go:52] waiting for apiserver process to appear ...
	I0103 13:08:13.760252   29024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 13:08:14.195601   29024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.019799524s)
	I0103 13:08:14.195616   29024 api_server.go:72] duration metric: took 3.034847614s to wait for apiserver process to appear ...
	I0103 13:08:14.195627   29024 api_server.go:88] waiting for apiserver healthz status ...
	I0103 13:08:14.195642   29024 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62175/healthz ...
	I0103 13:08:14.219809   29024 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-213000 addons enable metrics-server	
	
	
	I0103 13:08:14.261703   29024 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0103 13:08:14.319543   29024 addons.go:508] enable addons completed in 3.726576845s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0103 13:08:14.262609   29024 api_server.go:279] https://127.0.0.1:62175/healthz returned 200:
	ok
	I0103 13:08:14.321476   29024 api_server.go:141] control plane version: v1.28.4
	I0103 13:08:14.321488   29024 api_server.go:131] duration metric: took 125.852896ms to wait for apiserver health ...
	I0103 13:08:14.321494   29024 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 13:08:14.328196   29024 system_pods.go:59] 8 kube-system pods found
	I0103 13:08:14.328212   29024 system_pods.go:61] "coredns-5dd5756b68-zsk25" [e3cf0c20-cf73-49e2-be9a-78c74079ec47] Running
	I0103 13:08:14.328216   29024 system_pods.go:61] "etcd-default-k8s-diff-port-213000" [dc105291-2070-4381-854c-7b57b3cacf07] Running
	I0103 13:08:14.328222   29024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-213000" [5042b2c6-cff5-4cf7-8bfa-fc04769902d4] Running
	I0103 13:08:14.328225   29024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-213000" [e901899b-cee3-43cb-b98c-43f708d1ec1c] Running
	I0103 13:08:14.328229   29024 system_pods.go:61] "kube-proxy-xhbnc" [8981f44f-2ed3-4588-82a9-6db6e8b94f18] Running
	I0103 13:08:14.328234   29024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-213000" [36648cf3-8f76-46b3-b5ca-e5f914c24c7b] Running
	I0103 13:08:14.328241   29024 system_pods.go:61] "metrics-server-57f55c9bc5-wlbqr" [478989e7-8ab1-4e0a-82c1-c0e4925d768c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 13:08:14.328247   29024 system_pods.go:61] "storage-provisioner" [487d0f57-a867-43db-bad2-edc2d9ea3910] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 13:08:14.328255   29024 system_pods.go:74] duration metric: took 6.754577ms to wait for pod list to return data ...
	I0103 13:08:14.328262   29024 default_sa.go:34] waiting for default service account to be created ...
	I0103 13:08:14.331611   29024 default_sa.go:45] found service account: "default"
	I0103 13:08:14.331627   29024 default_sa.go:55] duration metric: took 3.35971ms for default service account to be created ...
	I0103 13:08:14.331636   29024 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 13:08:14.337988   29024 system_pods.go:86] 8 kube-system pods found
	I0103 13:08:14.338001   29024 system_pods.go:89] "coredns-5dd5756b68-zsk25" [e3cf0c20-cf73-49e2-be9a-78c74079ec47] Running
	I0103 13:08:14.338006   29024 system_pods.go:89] "etcd-default-k8s-diff-port-213000" [dc105291-2070-4381-854c-7b57b3cacf07] Running
	I0103 13:08:14.338010   29024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-213000" [5042b2c6-cff5-4cf7-8bfa-fc04769902d4] Running
	I0103 13:08:14.338015   29024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-213000" [e901899b-cee3-43cb-b98c-43f708d1ec1c] Running
	I0103 13:08:14.338018   29024 system_pods.go:89] "kube-proxy-xhbnc" [8981f44f-2ed3-4588-82a9-6db6e8b94f18] Running
	I0103 13:08:14.338022   29024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-213000" [36648cf3-8f76-46b3-b5ca-e5f914c24c7b] Running
	I0103 13:08:14.338029   29024 system_pods.go:89] "metrics-server-57f55c9bc5-wlbqr" [478989e7-8ab1-4e0a-82c1-c0e4925d768c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 13:08:14.338035   29024 system_pods.go:89] "storage-provisioner" [487d0f57-a867-43db-bad2-edc2d9ea3910] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 13:08:14.338043   29024 system_pods.go:126] duration metric: took 6.399142ms to wait for k8s-apps to be running ...
	I0103 13:08:14.338051   29024 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 13:08:14.338117   29024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 13:08:14.351074   29024 system_svc.go:56] duration metric: took 13.015683ms WaitForService to wait for kubelet.
	I0103 13:08:14.351095   29024 kubeadm.go:581] duration metric: took 3.190324457s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 13:08:14.351114   29024 node_conditions.go:102] verifying NodePressure condition ...
	I0103 13:08:14.560564   29024 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0103 13:08:14.560578   29024 node_conditions.go:123] node cpu capacity is 12
	I0103 13:08:14.560588   29024 node_conditions.go:105] duration metric: took 209.464409ms to run NodePressure ...
	I0103 13:08:14.560596   29024 start.go:228] waiting for startup goroutines ...
	I0103 13:08:14.560604   29024 start.go:233] waiting for cluster config update ...
	I0103 13:08:14.560614   29024 start.go:242] writing updated cluster config ...
	I0103 13:08:14.560936   29024 ssh_runner.go:195] Run: rm -f paused
	I0103 13:08:14.603859   29024 start.go:600] kubectl: 1.28.2, cluster: 1.28.4 (minor skew: 0)
	I0103 13:08:14.625656   29024 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-213000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.494961645Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.530984627Z" level=info msg="Loading containers: done."
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.538868695Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.538925879Z" level=info msg="Daemon has completed initialization"
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.565689331Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.565733584Z" level=info msg="API listen on [::]:2376"
	Jan 03 20:51:13 old-k8s-version-079000 systemd[1]: Started Docker Application Container Engine.
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.768677948Z" level=info msg="Processing signal 'terminated'"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.769687840Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.769897997Z" level=info msg="Daemon shutdown complete"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.770248509Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: docker.service: Deactivated successfully.
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: Starting Docker Application Container Engine...
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:20.823082271Z" level=info msg="Starting up"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:20.858304014Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:20.974500460Z" level=info msg="Loading containers: start."
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.058816401Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.095279502Z" level=info msg="Loading containers: done."
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.103107473Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.103167170Z" level=info msg="Daemon has completed initialization"
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.129628965Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.129721749Z" level=info msg="API listen on [::]:2376"
	Jan 03 20:51:21 old-k8s-version-079000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	time="2024-01-03T21:08:33Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	
	==> describe nodes <==
	
	==> dmesg <==
	[Jan 3 20:28] hrtimer: interrupt took 2402524 ns
	
	
	==> kernel <==
	 21:08:33 up  2:06,  0 users,  load average: 0.73, 0.63, 0.83
	Linux old-k8s-version-079000 6.5.11-linuxkit #1 SMP PREEMPT_DYNAMIC Mon Dec  4 10:03:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Jan 03 21:08:31 old-k8s-version-079000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 03 21:08:32 old-k8s-version-079000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 864.
	Jan 03 21:08:32 old-k8s-version-079000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 03 21:08:32 old-k8s-version-079000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 03 21:08:32 old-k8s-version-079000 kubelet[32902]: I0103 21:08:32.565218   32902 server.go:410] Version: v1.16.0
	Jan 03 21:08:32 old-k8s-version-079000 kubelet[32902]: I0103 21:08:32.565400   32902 plugins.go:100] No cloud provider specified.
	Jan 03 21:08:32 old-k8s-version-079000 kubelet[32902]: I0103 21:08:32.565408   32902 server.go:773] Client rotation is on, will bootstrap in background
	Jan 03 21:08:32 old-k8s-version-079000 kubelet[32902]: I0103 21:08:32.567106   32902 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 03 21:08:32 old-k8s-version-079000 kubelet[32902]: W0103 21:08:32.567811   32902 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 03 21:08:32 old-k8s-version-079000 kubelet[32902]: W0103 21:08:32.567871   32902 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 03 21:08:32 old-k8s-version-079000 kubelet[32902]: F0103 21:08:32.567896   32902 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 03 21:08:32 old-k8s-version-079000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 03 21:08:32 old-k8s-version-079000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 03 21:08:33 old-k8s-version-079000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 865.
	Jan 03 21:08:33 old-k8s-version-079000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 03 21:08:33 old-k8s-version-079000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 03 21:08:33 old-k8s-version-079000 kubelet[32983]: I0103 21:08:33.326171   32983 server.go:410] Version: v1.16.0
	Jan 03 21:08:33 old-k8s-version-079000 kubelet[32983]: I0103 21:08:33.326428   32983 plugins.go:100] No cloud provider specified.
	Jan 03 21:08:33 old-k8s-version-079000 kubelet[32983]: I0103 21:08:33.326440   32983 server.go:773] Client rotation is on, will bootstrap in background
	Jan 03 21:08:33 old-k8s-version-079000 kubelet[32983]: I0103 21:08:33.328390   32983 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 03 21:08:33 old-k8s-version-079000 kubelet[32983]: W0103 21:08:33.329048   32983 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 03 21:08:33 old-k8s-version-079000 kubelet[32983]: W0103 21:08:33.329131   32983 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 03 21:08:33 old-k8s-version-079000 kubelet[32983]: F0103 21:08:33.329154   32983 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 03 21:08:33 old-k8s-version-079000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 03 21:08:33 old-k8s-version-079000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0103 13:08:33.568320   29243 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
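The kubelet excerpt above ends every restart in "failed to run Kubelet: mountpoint for cpu not found". The v1.16.0 kubelet requires cgroup v1 controllers, and the 6.5.11-linuxkit kernel shown in the kernel section most likely exposes only the unified cgroup v2 hierarchy, so a cpu mountpoint never appears. With the kubelet stuck in that loop, its embedded dockershim socket is never created (hence the "container status" failure) and no static apiserver pod starts (hence the localhost:8443 refusal). A minimal check from the host, assuming the profile name taken from this log (a diagnostic sketch, not part of the test suite):

	out/minikube-darwin-amd64 ssh -p old-k8s-version-079000 -- stat -fc %T /sys/fs/cgroup

Output of cgroup2fs would confirm a cgroup-v2-only node; tmpfs would indicate a v1 hierarchy and rule this theory out.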
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-079000 -n old-k8s-version-079000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 2 (380.518185ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-079000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.05s)
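Since the post-mortem above leaves the node container Running while the apiserver reports Stopped, a quicker first pass on failures like this is minikube's built-in problem scan, which filters the full log dump down to known error signatures (a sketch, assuming the bundled build supports the --problems flag):

	out/minikube-darwin-amd64 logs -p old-k8s-version-079000 --problems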

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (379.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:09:02.137028   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:09:08.936274   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:09:39.668133   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 13:09:43.190457   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:10:16.477439   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:11:03.033356   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:11:21.715241   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:11:49.191062   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:12:17.264826   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 13:12:19.070749   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:12:41.416710   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
E0103 13:12:41.421862   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
E0103 13:12:41.434072   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
E0103 13:12:41.454366   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
E0103 13:12:41.494692   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
E0103 13:12:41.575552   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
E0103 13:12:41.735930   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
E0103 13:12:42.058119   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
E0103 13:12:42.700459   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
E0103 13:12:43.981211   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:12:46.542769   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
E0103 13:12:51.664221   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:12:56.913290   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 13:13:01.904627   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:13:12.649832   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:13:22.387434   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:13:26.618935   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:14:02.144659   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 13:14:03.348801   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/default-k8s-diff-port-213000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:14:08.945715   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:14:39.674706   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 13:14:43.199306   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61669/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0103 13:14:49.663492   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
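Every poll in the 9m0s window above returned EOF from https://127.0.0.1:61669/...; the docker inspect output in the post-mortem below maps host port 61669 to the node's 8443/tcp, so the test was reaching the published apiserver port of a node whose apiserver never came up. Reproducing the symptom by hand would look like this (a sketch; -k skips TLS verification, which is moot when the TCP stream drops):

	curl -k https://127.0.0.1:61669/version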
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-079000 -n old-k8s-version-079000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 2 (406.035491ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-079000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-079000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-079000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.637µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-079000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
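The image assertion directly above never had data to compare against: the describe call hit its context deadline, so the deployment info is empty rather than carrying a wrong image. On a reachable cluster, the equivalent check would be a one-liner along these lines (hypothetical, reusing the context and deployment names from the test):

	kubectl --context old-k8s-version-079000 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'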
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-079000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-079000:

-- stdout --
	[
	    {
	        "Id": "488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3",
	        "Created": "2024-01-03T20:44:55.833825695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 330830,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:51:07.721836081Z",
	            "FinishedAt": "2024-01-03T20:51:04.945962022Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/hosts",
	        "LogPath": "/var/lib/docker/containers/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3/488c5550224f7171ef208cbcc14e72b8e9b660b173df8ac004b157f8873115c3-json.log",
	        "Name": "/old-k8s-version-079000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-079000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-079000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a-init/diff:/var/lib/docker/overlay2/d51c25870073ca49ae45bcaffff5d04b6853b273710b15cd26d3414e5d7cfab6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41397c3f2359361b0e0bf92adf74fbe1a4b9037b18553c584c4e396602c1392a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-079000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-079000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-079000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-079000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "22690e506998a488020031708015bc4c616d9aded4ec18ee021cebb06f55f6c8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61670"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61671"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61672"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61668"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61669"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/22690e506998",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-079000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "488c5550224f",
	                        "old-k8s-version-079000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "fa57a59237dbd216e3611a46ef90c42978dc8b8c11f6ffc7c61970c426e7ce95",
	                    "EndpointID": "b9f1eeb15eb3bcf34443d22df5a9f0f604e4242a88fe4cc278cbd366a5c2f69a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
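(Aside: rather than dumping the whole inspect JSON, individual fields can be pulled with Go templates; the two commands below are illustrative and reuse the exact template patterns this harness runs later in this log.)

	docker container inspect old-k8s-version-079000 --format '{{.State.Status}}'
	docker container inspect old-k8s-version-079000 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'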
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000
E0103 13:14:52.242524   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 2 (380.48813ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-079000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-079000 logs -n 25: (1.465176347s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-362000                                  | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:01 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-362000                                  | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:01 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-362000                                  | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:01 PST |
	| delete  | -p embed-certs-362000                                  | embed-certs-362000           | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:01 PST |
	| delete  | -p                                                     | disable-driver-mounts-174000 | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:01 PST |
	|         | disable-driver-mounts-174000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:01 PST | 03 Jan 24 13:02 PST |
	|         | default-k8s-diff-port-213000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-213000  | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:02 PST | 03 Jan 24 13:02 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:02 PST | 03 Jan 24 13:03 PST |
	|         | default-k8s-diff-port-213000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-213000       | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:03 PST | 03 Jan 24 13:03 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:03 PST | 03 Jan 24 13:08 PST |
	|         | default-k8s-diff-port-213000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-213000                           | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:08 PST | 03 Jan 24 13:08 PST |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:08 PST | 03 Jan 24 13:08 PST |
	|         | default-k8s-diff-port-213000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:08 PST | 03 Jan 24 13:08 PST |
	|         | default-k8s-diff-port-213000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:08 PST | 03 Jan 24 13:08 PST |
	|         | default-k8s-diff-port-213000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-213000 | jenkins | v1.32.0 | 03 Jan 24 13:08 PST | 03 Jan 24 13:08 PST |
	|         | default-k8s-diff-port-213000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-298000 --memory=2200 --alsologtostderr   | newest-cni-298000            | jenkins | v1.32.0 | 03 Jan 24 13:08 PST | 03 Jan 24 13:09 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-298000             | newest-cni-298000            | jenkins | v1.32.0 | 03 Jan 24 13:09 PST | 03 Jan 24 13:09 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-298000                                   | newest-cni-298000            | jenkins | v1.32.0 | 03 Jan 24 13:09 PST | 03 Jan 24 13:09 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-298000                  | newest-cni-298000            | jenkins | v1.32.0 | 03 Jan 24 13:09 PST | 03 Jan 24 13:09 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-298000 --memory=2200 --alsologtostderr   | newest-cni-298000            | jenkins | v1.32.0 | 03 Jan 24 13:09 PST | 03 Jan 24 13:09 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| image   | newest-cni-298000 image list                           | newest-cni-298000            | jenkins | v1.32.0 | 03 Jan 24 13:09 PST | 03 Jan 24 13:09 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-298000                                   | newest-cni-298000            | jenkins | v1.32.0 | 03 Jan 24 13:09 PST | 03 Jan 24 13:09 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-298000                                   | newest-cni-298000            | jenkins | v1.32.0 | 03 Jan 24 13:09 PST | 03 Jan 24 13:10 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-298000                                   | newest-cni-298000            | jenkins | v1.32.0 | 03 Jan 24 13:10 PST | 03 Jan 24 13:10 PST |
	| delete  | -p newest-cni-298000                                   | newest-cni-298000            | jenkins | v1.32.0 | 03 Jan 24 13:10 PST | 03 Jan 24 13:10 PST |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 13:09:29
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 13:09:29.569788   29541 out.go:296] Setting OutFile to fd 1 ...
	I0103 13:09:29.570035   29541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 13:09:29.570041   29541 out.go:309] Setting ErrFile to fd 2...
	I0103 13:09:29.570045   29541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 13:09:29.570231   29541 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 13:09:29.571695   29541 out.go:303] Setting JSON to false
	I0103 13:09:29.595125   29541 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9539,"bootTime":1704306630,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 13:09:29.595223   29541 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 13:09:29.616804   29541 out.go:177] * [newest-cni-298000] minikube v1.32.0 on Darwin 14.2
	I0103 13:09:29.659867   29541 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 13:09:29.659959   29541 notify.go:220] Checking for updates...
	I0103 13:09:29.702651   29541 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 13:09:29.744897   29541 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 13:09:29.786667   29541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 13:09:29.807750   29541 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	I0103 13:09:29.828797   29541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 13:09:29.850198   29541 config.go:182] Loaded profile config "newest-cni-298000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0103 13:09:29.850967   29541 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 13:09:29.908433   29541 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 13:09:29.908599   29541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 13:09:30.011187   29541 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 21:09:30.000509589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 13:09:30.032659   29541 out.go:177] * Using the docker driver based on existing profile
	I0103 13:09:30.074348   29541 start.go:298] selected driver: docker
	I0103 13:09:30.074380   29541 start.go:902] validating driver "docker" against &{Name:newest-cni-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-298000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 13:09:30.074497   29541 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 13:09:30.078480   29541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 13:09:30.179124   29541 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 21:09:30.168508186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 13:09:30.179358   29541 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0103 13:09:30.179425   29541 cni.go:84] Creating CNI manager for ""
	I0103 13:09:30.179439   29541 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 13:09:30.179449   29541 start_flags.go:323] config:
	{Name:newest-cni-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-298000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 13:09:30.221808   29541 out.go:177] * Starting control plane node newest-cni-298000 in cluster newest-cni-298000
	I0103 13:09:30.244490   29541 cache.go:121] Beginning downloading kic base image for docker with docker
	I0103 13:09:30.265645   29541 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 13:09:30.307424   29541 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0103 13:09:30.307451   29541 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 13:09:30.307486   29541 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0103 13:09:30.307504   29541 cache.go:56] Caching tarball of preloaded images
	I0103 13:09:30.307619   29541 preload.go:174] Found /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0103 13:09:30.307630   29541 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0103 13:09:30.308071   29541 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/newest-cni-298000/config.json ...
	I0103 13:09:30.359322   29541 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 13:09:30.359344   29541 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 13:09:30.359366   29541 cache.go:194] Successfully downloaded all kic artifacts
	I0103 13:09:30.359422   29541 start.go:365] acquiring machines lock for newest-cni-298000: {Name:mk9aa456ab3f295a70f88b30f336db277ae81bdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 13:09:30.359501   29541 start.go:369] acquired machines lock for "newest-cni-298000" in 57.706µs
	I0103 13:09:30.359525   29541 start.go:96] Skipping create...Using existing machine configuration
	I0103 13:09:30.359533   29541 fix.go:54] fixHost starting: 
	I0103 13:09:30.359761   29541 cli_runner.go:164] Run: docker container inspect newest-cni-298000 --format={{.State.Status}}
	I0103 13:09:30.411367   29541 fix.go:102] recreateIfNeeded on newest-cni-298000: state=Stopped err=<nil>
	W0103 13:09:30.411419   29541 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 13:09:30.433055   29541 out.go:177] * Restarting existing docker container for "newest-cni-298000" ...
	I0103 13:09:30.474690   29541 cli_runner.go:164] Run: docker start newest-cni-298000
	I0103 13:09:30.717363   29541 cli_runner.go:164] Run: docker container inspect newest-cni-298000 --format={{.State.Status}}
	I0103 13:09:30.772380   29541 kic.go:430] container "newest-cni-298000" state is running.
	I0103 13:09:30.772993   29541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-298000
	I0103 13:09:30.829752   29541 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/newest-cni-298000/config.json ...
	I0103 13:09:30.830234   29541 machine.go:88] provisioning docker machine ...
	I0103 13:09:30.830258   29541 ubuntu.go:169] provisioning hostname "newest-cni-298000"
	I0103 13:09:30.830328   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:30.890352   29541 main.go:141] libmachine: Using SSH client type: native
	I0103 13:09:30.890697   29541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 62716 <nil> <nil>}
	I0103 13:09:30.890710   29541 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-298000 && echo "newest-cni-298000" | sudo tee /etc/hostname
	I0103 13:09:30.891721   29541 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0103 13:09:34.023666   29541 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-298000
	
	I0103 13:09:34.023768   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:34.078022   29541 main.go:141] libmachine: Using SSH client type: native
	I0103 13:09:34.078302   29541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 62716 <nil> <nil>}
	I0103 13:09:34.078317   29541 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-298000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-298000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-298000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 13:09:34.197310   29541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 13:09:34.197331   29541 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17885-10646/.minikube CaCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17885-10646/.minikube}
	I0103 13:09:34.197350   29541 ubuntu.go:177] setting up certificates
	I0103 13:09:34.197358   29541 provision.go:83] configureAuth start
	I0103 13:09:34.197429   29541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-298000
	I0103 13:09:34.249207   29541 provision.go:138] copyHostCerts
	I0103 13:09:34.249312   29541 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem, removing ...
	I0103 13:09:34.249321   29541 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem
	I0103 13:09:34.249444   29541 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem (1078 bytes)
	I0103 13:09:34.249682   29541 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem, removing ...
	I0103 13:09:34.249688   29541 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem
	I0103 13:09:34.249751   29541 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem (1123 bytes)
	I0103 13:09:34.249923   29541 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem, removing ...
	I0103 13:09:34.249929   29541 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem
	I0103 13:09:34.249989   29541 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem (1679 bytes)
	I0103 13:09:34.250144   29541 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem org=jenkins.newest-cni-298000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-298000]
	I0103 13:09:34.312031   29541 provision.go:172] copyRemoteCerts
	I0103 13:09:34.312098   29541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 13:09:34.312157   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:34.364180   29541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62716 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/newest-cni-298000/id_rsa Username:docker}
	I0103 13:09:34.452149   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 13:09:34.471856   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 13:09:34.492306   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0103 13:09:34.512650   29541 provision.go:86] duration metric: configureAuth took 315.27129ms
	I0103 13:09:34.512664   29541 ubuntu.go:193] setting minikube options for container-runtime
	I0103 13:09:34.512813   29541 config.go:182] Loaded profile config "newest-cni-298000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0103 13:09:34.512878   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:34.565150   29541 main.go:141] libmachine: Using SSH client type: native
	I0103 13:09:34.565449   29541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 62716 <nil> <nil>}
	I0103 13:09:34.565460   29541 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0103 13:09:34.683438   29541 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0103 13:09:34.683455   29541 ubuntu.go:71] root file system type: overlay
	I0103 13:09:34.683553   29541 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0103 13:09:34.683649   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:34.736511   29541 main.go:141] libmachine: Using SSH client type: native
	I0103 13:09:34.736803   29541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 62716 <nil> <nil>}
	I0103 13:09:34.736855   29541 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0103 13:09:34.866960   29541 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0103 13:09:34.867078   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:34.919258   29541 main.go:141] libmachine: Using SSH client type: native
	I0103 13:09:34.919551   29541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 62716 <nil> <nil>}
	I0103 13:09:34.919564   29541 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0103 13:09:35.042349   29541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 13:09:35.042370   29541 machine.go:91] provisioned docker machine in 4.212017623s
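(Aside: the diff-or-replace one-liner above is what keeps the unit update idempotent: Docker is only restarted when the rendered unit differs from the installed one. A readable sketch of the same logic, not the harness's actual code:)

	# diff exits 0 when the files are identical, so the block only runs on a change
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload
	    sudo systemctl -f enable docker
	    sudo systemctl -f restart docker
	fi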
	I0103 13:09:35.042377   29541 start.go:300] post-start starting for "newest-cni-298000" (driver="docker")
	I0103 13:09:35.042392   29541 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 13:09:35.042465   29541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 13:09:35.042525   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:35.094775   29541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62716 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/newest-cni-298000/id_rsa Username:docker}
	I0103 13:09:35.183209   29541 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 13:09:35.187042   29541 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 13:09:35.187072   29541 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 13:09:35.187081   29541 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 13:09:35.187086   29541 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 13:09:35.187097   29541 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/addons for local assets ...
	I0103 13:09:35.187182   29541 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/files for local assets ...
	I0103 13:09:35.187322   29541 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> 110902.pem in /etc/ssl/certs
	I0103 13:09:35.187471   29541 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 13:09:35.195543   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /etc/ssl/certs/110902.pem (1708 bytes)
	I0103 13:09:35.216273   29541 start.go:303] post-start completed in 173.877204ms
	I0103 13:09:35.216360   29541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 13:09:35.216424   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:35.267847   29541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62716 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/newest-cni-298000/id_rsa Username:docker}
	I0103 13:09:35.353189   29541 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 13:09:35.357996   29541 fix.go:56] fixHost completed within 4.998333091s
	I0103 13:09:35.358026   29541 start.go:83] releasing machines lock for "newest-cni-298000", held for 4.998386728s
	I0103 13:09:35.358119   29541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-298000
	I0103 13:09:35.409155   29541 ssh_runner.go:195] Run: cat /version.json
	I0103 13:09:35.409177   29541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 13:09:35.409250   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:35.409256   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:35.462297   29541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62716 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/newest-cni-298000/id_rsa Username:docker}
	I0103 13:09:35.462322   29541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62716 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/newest-cni-298000/id_rsa Username:docker}
	I0103 13:09:35.651297   29541 ssh_runner.go:195] Run: systemctl --version
	I0103 13:09:35.656232   29541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 13:09:35.661057   29541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0103 13:09:35.677555   29541 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0103 13:09:35.677634   29541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 13:09:35.685914   29541 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
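(Aside: the dense find/sed invocation above unrolls to roughly the following loop; a sketch assuming the same /etc/cni/net.d layout, not the harness's actual code:)

	for f in /etc/cni/net.d/*loopback.conf*; do
	    case "$f" in *.mk_disabled) continue ;; esac      # skip configs minikube already disabled
	    grep -q loopback "$f" || continue
	    # add a "name" field before the "type": "loopback" line if one is missing
	    grep -q name "$f" || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
	    # pin the cniVersion the runtime expects
	    sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
	done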
	I0103 13:09:35.685935   29541 start.go:475] detecting cgroup driver to use...
	I0103 13:09:35.685959   29541 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 13:09:35.686078   29541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 13:09:35.700522   29541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0103 13:09:35.709910   29541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0103 13:09:35.719261   29541 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0103 13:09:35.719335   29541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0103 13:09:35.728890   29541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 13:09:35.738334   29541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0103 13:09:35.747766   29541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0103 13:09:35.757078   29541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 13:09:35.765860   29541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
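(Aside: the sed series above leaves /etc/containerd/config.toml with the following values; an illustrative fragment showing only the touched keys, whose section placement within the file is assumed:)

	sandbox_image = "registry.k8s.io/pause:3.9"
	restrict_oom_score_adj = false
	SystemdCgroup = false    # cgroupfs, matching the driver detected on the host
	conf_dir = "/etc/cni/net.d"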
	I0103 13:09:35.775031   29541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 13:09:35.782985   29541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 13:09:35.791072   29541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 13:09:35.844005   29541 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0103 13:09:35.924117   29541 start.go:475] detecting cgroup driver to use...
	I0103 13:09:35.924143   29541 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 13:09:35.924258   29541 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0103 13:09:35.936640   29541 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0103 13:09:35.936709   29541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0103 13:09:35.948397   29541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 13:09:35.964466   29541 ssh_runner.go:195] Run: which cri-dockerd
	I0103 13:09:35.968888   29541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0103 13:09:35.978390   29541 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0103 13:09:36.010405   29541 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0103 13:09:36.123947   29541 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0103 13:09:36.212757   29541 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0103 13:09:36.212846   29541 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0103 13:09:36.229011   29541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 13:09:36.307506   29541 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0103 13:09:36.570318   29541 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0103 13:09:36.626836   29541 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0103 13:09:36.683459   29541 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0103 13:09:36.737804   29541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 13:09:36.791555   29541 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0103 13:09:36.821308   29541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 13:09:36.873782   29541 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0103 13:09:36.949453   29541 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0103 13:09:36.949589   29541 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0103 13:09:36.955128   29541 start.go:543] Will wait 60s for crictl version
	I0103 13:09:36.955196   29541 ssh_runner.go:195] Run: which crictl
	I0103 13:09:36.959993   29541 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 13:09:37.004910   29541 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0103 13:09:37.005003   29541 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 13:09:37.031124   29541 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0103 13:09:37.080304   29541 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0103 13:09:37.080395   29541 cli_runner.go:164] Run: docker exec -t newest-cni-298000 dig +short host.docker.internal
	I0103 13:09:37.199110   29541 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0103 13:09:37.199222   29541 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0103 13:09:37.203804   29541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
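
The bash one-liner above is an upsert on /etc/hosts: filter out any stale host.minikube.internal record, then append a fresh one. A minimal Go sketch of the same idea (illustrative only; the path, IP, and hostname are taken from this run):

// Sketch: remove any existing record for a hostname, then append the new one.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale record, like `grep -v`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.65.254", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
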
	I0103 13:09:37.214564   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:37.287847   29541 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0103 13:09:37.309896   29541 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0103 13:09:37.310071   29541 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 13:09:37.330477   29541 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0103 13:09:37.330499   29541 docker.go:601] Images already preloaded, skipping extraction
	I0103 13:09:37.330569   29541 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0103 13:09:37.351358   29541 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0103 13:09:37.351376   29541 cache_images.go:84] Images are preloaded, skipping loading
	I0103 13:09:37.351473   29541 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0103 13:09:37.398891   29541 cni.go:84] Creating CNI manager for ""
	I0103 13:09:37.398908   29541 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 13:09:37.398931   29541 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0103 13:09:37.398955   29541 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-298000 NodeName:newest-cni-298000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 13:09:37.399107   29541 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-298000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 13:09:37.399197   29541 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-298000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-298000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
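
The KubeletConfiguration fragment in the kubeadm config above can be decoded like any other YAML document. A small sketch using gopkg.in/yaml.v3 with a hand-written subset struct (not the upstream kubelet type) reads back the cgroup driver and eviction settings:

// Sketch: decode a subset of the KubeletConfiguration shown in the log.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	CgroupDriver string            `yaml:"cgroupDriver"`
	FailSwapOn   bool              `yaml:"failSwapOn"`
	EvictionHard map[string]string `yaml:"evictionHard"`
}

func main() {
	doc := []byte(`
cgroupDriver: cgroupfs
failSwapOn: false
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
`)
	var cfg kubeletConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("driver=%s failSwapOn=%v evictions=%v\n", cfg.CgroupDriver, cfg.FailSwapOn, cfg.EvictionHard)
}
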
	I0103 13:09:37.399261   29541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0103 13:09:37.407806   29541 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 13:09:37.407861   29541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 13:09:37.415889   29541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (420 bytes)
	I0103 13:09:37.431263   29541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0103 13:09:37.446633   29541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0103 13:09:37.462080   29541 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0103 13:09:37.466231   29541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 13:09:37.476667   29541 certs.go:56] Setting up /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/newest-cni-298000 for IP: 192.168.67.2
	I0103 13:09:37.476688   29541 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a30c05f18415c794a1ae2617714fd3a6ba516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 13:09:37.476868   29541 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key
	I0103 13:09:37.476921   29541 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key
	I0103 13:09:37.477015   29541 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/newest-cni-298000/client.key
	I0103 13:09:37.477075   29541 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/newest-cni-298000/apiserver.key.c7fa3a9e
	I0103 13:09:37.477130   29541 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/newest-cni-298000/proxy-client.key
	I0103 13:09:37.477323   29541 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem (1338 bytes)
	W0103 13:09:37.477359   29541 certs.go:433] ignoring /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090_empty.pem, impossibly tiny 0 bytes
	I0103 13:09:37.477372   29541 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 13:09:37.477404   29541 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem (1078 bytes)
	I0103 13:09:37.477437   29541 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem (1123 bytes)
	I0103 13:09:37.477465   29541 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem (1679 bytes)
	I0103 13:09:37.477535   29541 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem (1708 bytes)
	I0103 13:09:37.478087   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/newest-cni-298000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 13:09:37.498607   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/newest-cni-298000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 13:09:37.518919   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/newest-cni-298000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 13:09:37.539757   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/newest-cni-298000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 13:09:37.560422   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 13:09:37.580870   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 13:09:37.601188   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 13:09:37.621886   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 13:09:37.642563   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem --> /usr/share/ca-certificates/11090.pem (1338 bytes)
	I0103 13:09:37.662767   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /usr/share/ca-certificates/110902.pem (1708 bytes)
	I0103 13:09:37.683252   29541 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 13:09:37.703907   29541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 13:09:37.719543   29541 ssh_runner.go:195] Run: openssl version
	I0103 13:09:37.724944   29541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 13:09:37.733964   29541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 13:09:37.738376   29541 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I0103 13:09:37.738445   29541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 13:09:37.745537   29541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 13:09:37.755059   29541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11090.pem && ln -fs /usr/share/ca-certificates/11090.pem /etc/ssl/certs/11090.pem"
	I0103 13:09:37.764744   29541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11090.pem
	I0103 13:09:37.769171   29541 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:57 /usr/share/ca-certificates/11090.pem
	I0103 13:09:37.769227   29541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11090.pem
	I0103 13:09:37.776499   29541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11090.pem /etc/ssl/certs/51391683.0"
	I0103 13:09:37.785360   29541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110902.pem && ln -fs /usr/share/ca-certificates/110902.pem /etc/ssl/certs/110902.pem"
	I0103 13:09:37.795518   29541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110902.pem
	I0103 13:09:37.800214   29541 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:57 /usr/share/ca-certificates/110902.pem
	I0103 13:09:37.800273   29541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110902.pem
	I0103 13:09:37.807286   29541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110902.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 13:09:37.815869   29541 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 13:09:37.820123   29541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 13:09:37.826782   29541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 13:09:37.833362   29541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 13:09:37.839640   29541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 13:09:37.845970   29541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 13:09:37.852461   29541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
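
Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 86400 seconds (24 hours). The same check in Go with crypto/x509 looks like this (a sketch; the path is one of the certs from this run):

// Sketch: report whether a PEM certificate expires within a given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to `openssl x509 -checkend`: is NotAfter inside now+d?
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
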
	I0103 13:09:37.858752   29541 kubeadm.go:404] StartCluster: {Name:newest-cni-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-298000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 13:09:37.858868   29541 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 13:09:37.877182   29541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 13:09:37.885749   29541 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 13:09:37.885767   29541 kubeadm.go:636] restartCluster start
	I0103 13:09:37.885818   29541 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 13:09:37.893628   29541 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:37.893709   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:37.946451   29541 kubeconfig.go:135] verify returned: extract IP: "newest-cni-298000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 13:09:37.946632   29541 kubeconfig.go:146] "newest-cni-298000" context is missing from /Users/jenkins/minikube-integration/17885-10646/kubeconfig - will repair!
	I0103 13:09:37.946936   29541 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/kubeconfig: {Name:mk61966fd03b327572b428e807810fbe63a7e94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 13:09:37.948431   29541 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 13:09:37.956928   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:37.956977   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:37.966222   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:38.459077   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:38.459265   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:38.470543   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:38.957335   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:38.957615   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:38.967898   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:39.457745   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:39.457926   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:39.468774   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:39.958431   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:39.958546   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:39.969801   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:40.457813   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:40.458003   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:40.469441   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:40.958618   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:40.958730   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:40.970294   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:41.458616   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:41.458698   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:41.468973   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:41.957282   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:41.957430   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:41.967890   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:42.457394   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:42.457552   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:42.468602   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:42.959191   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:42.959340   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:42.970559   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:43.459167   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:43.459332   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:43.470919   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:43.958218   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:43.958430   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:43.969775   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:44.457513   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:44.457630   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:44.469352   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:44.959172   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:44.959297   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:44.970158   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:45.458409   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:45.458546   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:45.469945   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:45.957400   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:45.957470   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:45.967719   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:46.457554   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:46.457746   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:46.468996   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:46.957361   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:46.957496   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:46.967776   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:47.457284   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:47.457396   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:47.468779   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:47.959312   29541 api_server.go:166] Checking apiserver status ...
	I0103 13:09:47.959485   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 13:09:47.970719   29541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:47.970735   29541 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
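
The loop above retries roughly every 500ms until an overall deadline, which is why it ends with "context deadline exceeded". A minimal Go sketch of that pattern (illustrative, not minikube's api_server.go; the 10s timeout below is an arbitrary stand-in):

// Sketch: poll for a kube-apiserver process until found or the deadline hits.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded", as in the log
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}
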
	I0103 13:09:47.970751   29541 kubeadm.go:1135] stopping kube-system containers ...
	I0103 13:09:47.970817   29541 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0103 13:09:47.990355   29541 docker.go:469] Stopping containers: [ad76975806cc 515bf842b610 a116480dde70 a242a5d38a5b a096c8ea9876 00e0381678c5 1a983c4f5261 42ab5ad9e420 8b6d432d19af e4934da81df8 b785552e4612 8704109ab553 e09c6b063358 0f0682e12fb1 e6a39951f9a7]
	I0103 13:09:47.990438   29541 ssh_runner.go:195] Run: docker stop ad76975806cc 515bf842b610 a116480dde70 a242a5d38a5b a096c8ea9876 00e0381678c5 1a983c4f5261 42ab5ad9e420 8b6d432d19af e4934da81df8 b785552e4612 8704109ab553 e09c6b063358 0f0682e12fb1 e6a39951f9a7
	I0103 13:09:48.015481   29541 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0103 13:09:48.027097   29541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 13:09:48.035809   29541 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5651 Jan  3 21:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  3 21:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan  3 21:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  3 21:08 /etc/kubernetes/scheduler.conf
	
	I0103 13:09:48.035870   29541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0103 13:09:48.044316   29541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0103 13:09:48.052653   29541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0103 13:09:48.061003   29541 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:48.061056   29541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0103 13:09:48.069035   29541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0103 13:09:48.077133   29541 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0103 13:09:48.077193   29541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0103 13:09:48.085110   29541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 13:09:48.093401   29541 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 13:09:48.093418   29541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:09:48.137937   29541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:09:49.095456   29541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:09:49.220968   29541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:09:49.270925   29541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:09:49.324795   29541 api_server.go:52] waiting for apiserver process to appear ...
	I0103 13:09:49.324868   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 13:09:49.825963   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 13:09:50.325164   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 13:09:50.408019   29541 api_server.go:72] duration metric: took 1.083193703s to wait for apiserver process to appear ...
	I0103 13:09:50.408041   29541 api_server.go:88] waiting for apiserver healthz status ...
	I0103 13:09:50.408068   29541 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62720/healthz ...
	I0103 13:09:50.409408   29541 api_server.go:269] stopped: https://127.0.0.1:62720/healthz: Get "https://127.0.0.1:62720/healthz": EOF
	I0103 13:09:50.908224   29541 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62720/healthz ...
	I0103 13:09:53.196137   29541 api_server.go:279] https://127.0.0.1:62720/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 13:09:53.196308   29541 api_server.go:103] status: https://127.0.0.1:62720/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 13:09:53.196324   29541 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62720/healthz ...
	I0103 13:09:53.205044   29541 api_server.go:279] https://127.0.0.1:62720/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 13:09:53.205067   29541 api_server.go:103] status: https://127.0.0.1:62720/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 13:09:53.408268   29541 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62720/healthz ...
	I0103 13:09:53.416935   29541 api_server.go:279] https://127.0.0.1:62720/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 13:09:53.416952   29541 api_server.go:103] status: https://127.0.0.1:62720/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 13:09:53.908322   29541 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62720/healthz ...
	I0103 13:09:53.914618   29541 api_server.go:279] https://127.0.0.1:62720/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 13:09:53.914652   29541 api_server.go:103] status: https://127.0.0.1:62720/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 13:09:54.408283   29541 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62720/healthz ...
	I0103 13:09:54.414993   29541 api_server.go:279] https://127.0.0.1:62720/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 13:09:54.415015   29541 api_server.go:103] status: https://127.0.0.1:62720/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 13:09:54.908524   29541 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62720/healthz ...
	I0103 13:09:54.915219   29541 api_server.go:279] https://127.0.0.1:62720/healthz returned 200:
	ok
	I0103 13:09:54.921942   29541 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 13:09:54.921956   29541 api_server.go:131] duration metric: took 4.513792049s to wait for apiserver health ...
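
The healthz sequence above is effectively a readiness probe: 403 before RBAC bootstrap completes, 500 while post-start hooks are still failing, then 200 once the control plane settles. A Go sketch of such a poll against this run's forwarded endpoint (127.0.0.1:62720; TLS verification is skipped only because this is a localhost tunnel, and the retry count is an arbitrary illustration):

// Sketch: poll /healthz until the apiserver reports 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://127.0.0.1:62720/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready, status", code) // 403 pre-RBAC, 500 while hooks run
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}
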
	I0103 13:09:54.921963   29541 cni.go:84] Creating CNI manager for ""
	I0103 13:09:54.921974   29541 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 13:09:54.942653   29541 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0103 13:09:54.966080   29541 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0103 13:09:54.976178   29541 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0103 13:09:54.991229   29541 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 13:09:54.999416   29541 system_pods.go:59] 8 kube-system pods found
	I0103 13:09:54.999434   29541 system_pods.go:61] "coredns-76f75df574-w9vs5" [8040ddb9-3171-41ce-813f-bbd6253fb3a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 13:09:54.999441   29541 system_pods.go:61] "etcd-newest-cni-298000" [8aff55ac-b560-454a-b9ba-0944cbfeb23e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 13:09:54.999450   29541 system_pods.go:61] "kube-apiserver-newest-cni-298000" [98f399da-d901-4a15-9e26-0cd86c20aaf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 13:09:54.999456   29541 system_pods.go:61] "kube-controller-manager-newest-cni-298000" [1952c9fe-9020-48a1-be53-b3e5697a2876] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 13:09:54.999465   29541 system_pods.go:61] "kube-proxy-kqkf7" [154eb4e9-f3f1-4c69-b9bc-3b50b386a0fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0103 13:09:54.999471   29541 system_pods.go:61] "kube-scheduler-newest-cni-298000" [7cbae678-437c-422b-a67d-afdd7f0e040e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 13:09:54.999485   29541 system_pods.go:61] "metrics-server-57f55c9bc5-xk4cv" [8acd404a-5fdb-4b48-8f97-e18aad670519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 13:09:54.999490   29541 system_pods.go:61] "storage-provisioner" [f5fedf05-623f-429d-9e14-f4ac5d0a2cb8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0103 13:09:54.999498   29541 system_pods.go:74] duration metric: took 8.256789ms to wait for pod list to return data ...
	I0103 13:09:54.999505   29541 node_conditions.go:102] verifying NodePressure condition ...
	I0103 13:09:55.002680   29541 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0103 13:09:55.002695   29541 node_conditions.go:123] node cpu capacity is 12
	I0103 13:09:55.002705   29541 node_conditions.go:105] duration metric: took 3.195548ms to run NodePressure ...
	I0103 13:09:55.002740   29541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 13:09:55.249826   29541 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 13:09:55.261182   29541 ops.go:34] apiserver oom_adj: -16
	I0103 13:09:55.261212   29541 kubeadm.go:640] restartCluster took 17.374981938s
	I0103 13:09:55.261226   29541 kubeadm.go:406] StartCluster complete in 17.402032267s
	I0103 13:09:55.261245   29541 settings.go:142] acquiring lock: {Name:mk777823310df39752595be0f41f425a2c8eb047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 13:09:55.261367   29541 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 13:09:55.262193   29541 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/kubeconfig: {Name:mk61966fd03b327572b428e807810fbe63a7e94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 13:09:55.262512   29541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 13:09:55.262525   29541 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 13:09:55.262593   29541 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-298000"
	I0103 13:09:55.262629   29541 addons.go:237] Setting addon storage-provisioner=true in "newest-cni-298000"
	W0103 13:09:55.262637   29541 addons.go:246] addon storage-provisioner should already be in state true
	I0103 13:09:55.262680   29541 addons.go:69] Setting default-storageclass=true in profile "newest-cni-298000"
	I0103 13:09:55.262684   29541 host.go:66] Checking if "newest-cni-298000" exists ...
	I0103 13:09:55.262694   29541 addons.go:69] Setting dashboard=true in profile "newest-cni-298000"
	I0103 13:09:55.262729   29541 addons.go:237] Setting addon dashboard=true in "newest-cni-298000"
	W0103 13:09:55.262742   29541 addons.go:246] addon dashboard should already be in state true
	I0103 13:09:55.262742   29541 addons.go:69] Setting metrics-server=true in profile "newest-cni-298000"
	I0103 13:09:55.262756   29541 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-298000"
	I0103 13:09:55.262763   29541 config.go:182] Loaded profile config "newest-cni-298000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0103 13:09:55.262776   29541 addons.go:237] Setting addon metrics-server=true in "newest-cni-298000"
	I0103 13:09:55.262792   29541 host.go:66] Checking if "newest-cni-298000" exists ...
	W0103 13:09:55.262887   29541 addons.go:246] addon metrics-server should already be in state true
	I0103 13:09:55.262920   29541 host.go:66] Checking if "newest-cni-298000" exists ...
	I0103 13:09:55.263166   29541 cli_runner.go:164] Run: docker container inspect newest-cni-298000 --format={{.State.Status}}
	I0103 13:09:55.263175   29541 cli_runner.go:164] Run: docker container inspect newest-cni-298000 --format={{.State.Status}}
	I0103 13:09:55.264200   29541 cli_runner.go:164] Run: docker container inspect newest-cni-298000 --format={{.State.Status}}
	I0103 13:09:55.264477   29541 cli_runner.go:164] Run: docker container inspect newest-cni-298000 --format={{.State.Status}}
	I0103 13:09:55.272922   29541 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-298000" context rescaled to 1 replicas
	I0103 13:09:55.273019   29541 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0103 13:09:55.294666   29541 out.go:177] * Verifying Kubernetes components...
	I0103 13:09:55.368974   29541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 13:09:55.417124   29541 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0103 13:09:55.437980   29541 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0103 13:09:55.380514   29541 addons.go:237] Setting addon default-storageclass=true in "newest-cni-298000"
	I0103 13:09:55.423041   29541 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 13:09:55.423054   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-298000
	W0103 13:09:55.459166   29541 addons.go:246] addon default-storageclass should already be in state true
	I0103 13:09:55.459168   29541 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 13:09:55.480290   29541 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 13:09:55.500945   29541 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0103 13:09:55.500971   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 13:09:55.501025   29541 host.go:66] Checking if "newest-cni-298000" exists ...
	I0103 13:09:55.522233   29541 addons.go:429] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0103 13:09:55.522268   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:55.543147   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0103 13:09:55.543231   29541 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 13:09:55.543245   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 13:09:55.543326   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:55.543359   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:55.544819   29541 cli_runner.go:164] Run: docker container inspect newest-cni-298000 --format={{.State.Status}}
	I0103 13:09:55.552161   29541 api_server.go:52] waiting for apiserver process to appear ...
	I0103 13:09:55.552440   29541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 13:09:55.565599   29541 api_server.go:72] duration metric: took 292.52081ms to wait for apiserver process to appear ...
	I0103 13:09:55.565625   29541 api_server.go:88] waiting for apiserver healthz status ...
	I0103 13:09:55.565689   29541 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62720/healthz ...
	I0103 13:09:55.574322   29541 api_server.go:279] https://127.0.0.1:62720/healthz returned 200:
	ok
	I0103 13:09:55.576921   29541 api_server.go:141] control plane version: v1.29.0-rc.2
	I0103 13:09:55.576943   29541 api_server.go:131] duration metric: took 11.310654ms to wait for apiserver health ...
	I0103 13:09:55.576952   29541 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 13:09:55.586207   29541 system_pods.go:59] 8 kube-system pods found
	I0103 13:09:55.586238   29541 system_pods.go:61] "coredns-76f75df574-w9vs5" [8040ddb9-3171-41ce-813f-bbd6253fb3a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0103 13:09:55.586253   29541 system_pods.go:61] "etcd-newest-cni-298000" [8aff55ac-b560-454a-b9ba-0944cbfeb23e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 13:09:55.586276   29541 system_pods.go:61] "kube-apiserver-newest-cni-298000" [98f399da-d901-4a15-9e26-0cd86c20aaf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 13:09:55.586285   29541 system_pods.go:61] "kube-controller-manager-newest-cni-298000" [1952c9fe-9020-48a1-be53-b3e5697a2876] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0103 13:09:55.586296   29541 system_pods.go:61] "kube-proxy-kqkf7" [154eb4e9-f3f1-4c69-b9bc-3b50b386a0fe] Running
	I0103 13:09:55.586303   29541 system_pods.go:61] "kube-scheduler-newest-cni-298000" [7cbae678-437c-422b-a67d-afdd7f0e040e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 13:09:55.586315   29541 system_pods.go:61] "metrics-server-57f55c9bc5-xk4cv" [8acd404a-5fdb-4b48-8f97-e18aad670519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0103 13:09:55.586325   29541 system_pods.go:61] "storage-provisioner" [f5fedf05-623f-429d-9e14-f4ac5d0a2cb8] Running
	I0103 13:09:55.586335   29541 system_pods.go:74] duration metric: took 9.376325ms to wait for pod list to return data ...
	I0103 13:09:55.586343   29541 default_sa.go:34] waiting for default service account to be created ...
	I0103 13:09:55.590470   29541 default_sa.go:45] found service account: "default"
	I0103 13:09:55.590494   29541 default_sa.go:55] duration metric: took 4.143213ms for default service account to be created ...
	I0103 13:09:55.590511   29541 kubeadm.go:581] duration metric: took 317.435855ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0103 13:09:55.590532   29541 node_conditions.go:102] verifying NodePressure condition ...
	I0103 13:09:55.594562   29541 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0103 13:09:55.594588   29541 node_conditions.go:123] node cpu capacity is 12
	I0103 13:09:55.594605   29541 node_conditions.go:105] duration metric: took 4.065972ms to run NodePressure ...
	I0103 13:09:55.594621   29541 start.go:228] waiting for startup goroutines ...
	I0103 13:09:55.617369   29541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62716 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/newest-cni-298000/id_rsa Username:docker}
	I0103 13:09:55.617531   29541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62716 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/newest-cni-298000/id_rsa Username:docker}
	I0103 13:09:55.617707   29541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62716 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/newest-cni-298000/id_rsa Username:docker}
	I0103 13:09:55.617789   29541 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 13:09:55.617799   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 13:09:55.617887   29541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-298000
	I0103 13:09:55.678277   29541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62716 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/newest-cni-298000/id_rsa Username:docker}
	I0103 13:09:55.717452   29541 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 13:09:55.717473   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0103 13:09:55.717452   29541 addons.go:429] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0103 13:09:55.717599   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0103 13:09:55.718010   29541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 13:09:55.735259   29541 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 13:09:55.735269   29541 addons.go:429] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0103 13:09:55.735279   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 13:09:55.735293   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0103 13:09:55.754919   29541 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 13:09:55.754925   29541 addons.go:429] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0103 13:09:55.754941   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0103 13:09:55.754947   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 13:09:55.801123   29541 addons.go:429] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0103 13:09:55.801143   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0103 13:09:55.808101   29541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 13:09:55.809083   29541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 13:09:55.823029   29541 addons.go:429] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0103 13:09:55.823057   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0103 13:09:55.901551   29541 addons.go:429] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0103 13:09:55.901571   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0103 13:09:55.921111   29541 addons.go:429] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0103 13:09:55.921132   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0103 13:09:56.007775   29541 addons.go:429] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0103 13:09:56.007790   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0103 13:09:56.027298   29541 addons.go:429] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0103 13:09:56.027312   29541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0103 13:09:56.109566   29541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0103 13:09:56.829376   29541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.111321371s)
	I0103 13:09:56.942733   29541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.134573631s)
	I0103 13:09:56.942754   29541 addons.go:473] Verifying addon metrics-server=true in "newest-cni-298000"
	I0103 13:09:56.942774   29541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.133635738s)
	I0103 13:09:57.129231   29541 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-298000 addons enable metrics-server	
	
	
	I0103 13:09:57.151187   29541 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0103 13:09:57.171936   29541 addons.go:508] enable addons completed in 1.909371454s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0103 13:09:57.172014   29541 start.go:233] waiting for cluster config update ...
	I0103 13:09:57.172035   29541 start.go:242] writing updated cluster config ...
	I0103 13:09:57.193461   29541 ssh_runner.go:195] Run: rm -f paused
	I0103 13:09:57.234828   29541 start.go:600] kubectl: 1.28.2, cluster: 1.29.0-rc.2 (minor skew: 1)
	I0103 13:09:57.256157   29541 out.go:177] * Done! kubectl is now configured to use "newest-cni-298000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.494961645Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.530984627Z" level=info msg="Loading containers: done."
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.538868695Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.538925879Z" level=info msg="Daemon has completed initialization"
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.565689331Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 03 20:51:13 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:13.565733584Z" level=info msg="API listen on [::]:2376"
	Jan 03 20:51:13 old-k8s-version-079000 systemd[1]: Started Docker Application Container Engine.
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.768677948Z" level=info msg="Processing signal 'terminated'"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.769687840Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.769897997Z" level=info msg="Daemon shutdown complete"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[718]: time="2024-01-03T20:51:20.770248509Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: docker.service: Deactivated successfully.
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 03 20:51:20 old-k8s-version-079000 systemd[1]: Starting Docker Application Container Engine...
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:20.823082271Z" level=info msg="Starting up"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:20.858304014Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 03 20:51:20 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:20.974500460Z" level=info msg="Loading containers: start."
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.058816401Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.095279502Z" level=info msg="Loading containers: done."
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.103107473Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.103167170Z" level=info msg="Daemon has completed initialization"
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.129628965Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 03 20:51:21 old-k8s-version-079000 dockerd[945]: time="2024-01-03T20:51:21.129721749Z" level=info msg="API listen on [::]:2376"
	Jan 03 20:51:21 old-k8s-version-079000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-01-03T21:14:53Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	
	==> dmesg <==
	[Jan 3 20:28] hrtimer: interrupt took 2402524 ns
	
	
	==> kernel <==
	 21:14:53 up  2:12,  0 users,  load average: 0.10, 0.35, 0.65
	Linux old-k8s-version-079000 6.5.11-linuxkit #1 SMP PREEMPT_DYNAMIC Mon Dec  4 10:03:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Jan 03 21:14:51 old-k8s-version-079000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 03 21:14:52 old-k8s-version-079000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1366.
	Jan 03 21:14:52 old-k8s-version-079000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 03 21:14:52 old-k8s-version-079000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 03 21:14:52 old-k8s-version-079000 kubelet[41992]: I0103 21:14:52.829320   41992 server.go:410] Version: v1.16.0
	Jan 03 21:14:52 old-k8s-version-079000 kubelet[41992]: I0103 21:14:52.829523   41992 plugins.go:100] No cloud provider specified.
	Jan 03 21:14:52 old-k8s-version-079000 kubelet[41992]: I0103 21:14:52.829534   41992 server.go:773] Client rotation is on, will bootstrap in background
	Jan 03 21:14:52 old-k8s-version-079000 kubelet[41992]: I0103 21:14:52.831258   41992 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 03 21:14:52 old-k8s-version-079000 kubelet[41992]: W0103 21:14:52.832163   41992 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 03 21:14:52 old-k8s-version-079000 kubelet[41992]: W0103 21:14:52.832244   41992 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 03 21:14:52 old-k8s-version-079000 kubelet[41992]: F0103 21:14:52.832275   41992 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 03 21:14:52 old-k8s-version-079000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 03 21:14:52 old-k8s-version-079000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 03 21:14:53 old-k8s-version-079000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1367.
	Jan 03 21:14:53 old-k8s-version-079000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 03 21:14:53 old-k8s-version-079000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 03 21:14:53 old-k8s-version-079000 kubelet[42120]: I0103 21:14:53.563235   42120 server.go:410] Version: v1.16.0
	Jan 03 21:14:53 old-k8s-version-079000 kubelet[42120]: I0103 21:14:53.563382   42120 plugins.go:100] No cloud provider specified.
	Jan 03 21:14:53 old-k8s-version-079000 kubelet[42120]: I0103 21:14:53.563391   42120 server.go:773] Client rotation is on, will bootstrap in background
	Jan 03 21:14:53 old-k8s-version-079000 kubelet[42120]: I0103 21:14:53.564926   42120 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 03 21:14:53 old-k8s-version-079000 kubelet[42120]: W0103 21:14:53.566668   42120 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 03 21:14:53 old-k8s-version-079000 kubelet[42120]: W0103 21:14:53.566743   42120 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 03 21:14:53 old-k8s-version-079000 kubelet[42120]: F0103 21:14:53.566772   42120 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 03 21:14:53 old-k8s-version-079000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 03 21:14:53 old-k8s-version-079000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0103 13:14:53.399377   29846 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
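
Note: in the stderr above, kubectl fails with "connection refused" on localhost:8443, meaning nothing is listening on the apiserver port at that point. As an illustrative sketch only (not part of the test suite; the address and timeout are assumptions), a plain TCP dial is enough to distinguish a down apiserver from TLS or credential problems, which only surface after a connection succeeds:

```go
// Sketch: probe an apiserver port before issuing kubectl commands.
// A refused dial matches the "connection to the server localhost:8443
// was refused" error captured above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```
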
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-079000 -n old-k8s-version-079000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 2 (394.833182ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-079000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (379.89s)
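
Note: the kubelet section of the log above loops on "failed to run Kubelet: mountpoint for cpu not found" (restart counter at 1366, then 1367), so the old-k8s-version apiserver never comes back and the addon check times out. Kubelet v1.16 expects a cgroup v1 "cpu" controller mount; a minimal Go sketch of that kind of precondition check (illustrative only, not kubelet's actual code) looks like this:

```go
// Sketch: scan /proc/mounts for a cgroup v1 mount exposing the "cpu"
// controller, roughly the precondition whose absence produces the
// "mountpoint for cpu not found" fatal error in the kubelet log above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Println("cannot read mounts:", err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) < 4 {
			continue
		}
		if fields[2] == "cgroup" && strings.Contains(","+fields[3]+",", ",cpu,") {
			fmt.Println("cpu cgroup mounted at", fields[1])
			return
		}
	}
	fmt.Println("no cpu cgroup mountpoint found")
}
```

On a cgroup-v2-only host, /proc/mounts carries a single "cgroup2" entry and no per-controller v1 mounts, so a scan like this finds nothing, which would be consistent with the restart loop above.
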


Test pass (292/329)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.38
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.31
10 TestDownloadOnly/v1.28.4/json-events 44.1
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.32
17 TestDownloadOnly/v1.29.0-rc.2/json-events 43.13
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.34
23 TestDownloadOnly/DeleteAll 0.65
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
25 TestDownloadOnlyKic 1.97
26 TestBinaryMirror 1.64
27 TestOffline 42.82
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
32 TestAddons/Setup 157.59
36 TestAddons/parallel/InspektorGadget 10.84
37 TestAddons/parallel/MetricsServer 6.8
38 TestAddons/parallel/HelmTiller 10.3
40 TestAddons/parallel/CSI 66.36
41 TestAddons/parallel/Headlamp 14.53
42 TestAddons/parallel/CloudSpanner 5.62
43 TestAddons/parallel/LocalPath 54.3
44 TestAddons/parallel/NvidiaDevicePlugin 5.62
45 TestAddons/parallel/Yakd 6.05
48 TestAddons/serial/GCPAuth/Namespaces 0.1
49 TestAddons/StoppedEnableDisable 11.77
50 TestCertOptions 24.22
51 TestCertExpiration 227
52 TestDockerFlags 25.67
53 TestForceSystemdFlag 26.56
54 TestForceSystemdEnv 26.69
57 TestHyperKitDriverInstallOrUpdate 12.44
60 TestErrorSpam/setup 21.8
61 TestErrorSpam/start 2.06
62 TestErrorSpam/status 1.19
63 TestErrorSpam/pause 1.65
64 TestErrorSpam/unpause 1.75
65 TestErrorSpam/stop 12.55
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 36.98
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 39.42
72 TestFunctional/serial/KubeContext 0.04
73 TestFunctional/serial/KubectlGetPods 0.07
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.64
77 TestFunctional/serial/CacheCmd/cache/add_local 1.61
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
79 TestFunctional/serial/CacheCmd/cache/list 0.08
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.41
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
82 TestFunctional/serial/CacheCmd/cache/delete 0.17
83 TestFunctional/serial/MinikubeKubectlCmd 0.58
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.78
85 TestFunctional/serial/ExtraConfig 37.36
86 TestFunctional/serial/ComponentHealth 0.06
87 TestFunctional/serial/LogsCmd 3.08
88 TestFunctional/serial/LogsFileCmd 3.02
89 TestFunctional/serial/InvalidService 4.56
91 TestFunctional/parallel/ConfigCmd 0.52
92 TestFunctional/parallel/DashboardCmd 12.54
93 TestFunctional/parallel/DryRun 1.37
94 TestFunctional/parallel/InternationalLanguage 0.73
95 TestFunctional/parallel/StatusCmd 1.25
100 TestFunctional/parallel/AddonsCmd 0.27
101 TestFunctional/parallel/PersistentVolumeClaim 28.93
103 TestFunctional/parallel/SSHCmd 0.79
104 TestFunctional/parallel/CpCmd 2.58
105 TestFunctional/parallel/MySQL 32.8
106 TestFunctional/parallel/FileSync 0.54
107 TestFunctional/parallel/CertSync 3.02
111 TestFunctional/parallel/NodeLabels 0.07
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
115 TestFunctional/parallel/License 0.52
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.23
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
127 TestFunctional/parallel/ServiceCmd/DeployApp 8.16
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
129 TestFunctional/parallel/ProfileCmd/profile_list 0.47
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
132 TestFunctional/parallel/ServiceCmd/List 0.6
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
134 TestFunctional/parallel/ServiceCmd/HTTPS 15
136 TestFunctional/parallel/ServiceCmd/Format 15
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.56
138 TestFunctional/parallel/ServiceCmd/URL 15
139 TestFunctional/parallel/Version/short 0.11
140 TestFunctional/parallel/Version/components 0.82
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
145 TestFunctional/parallel/ImageCommands/ImageBuild 3.33
146 TestFunctional/parallel/ImageCommands/Setup 2.52
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.68
148 TestFunctional/parallel/DockerEnv/bash 1.87
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.78
153 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.43
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.03
155 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
156 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.6
157 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.56
158 TestFunctional/delete_addon-resizer_images 0.15
159 TestFunctional/delete_my-image_image 0.07
160 TestFunctional/delete_minikube_cached_images 0.06
164 TestImageBuild/serial/Setup 21.18
165 TestImageBuild/serial/NormalBuild 1.83
166 TestImageBuild/serial/BuildWithBuildArg 0.93
167 TestImageBuild/serial/BuildWithDockerIgnore 0.75
168 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.74
178 TestJSONOutput/start/Command 74.02
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.61
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.61
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 10.8
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.77
203 TestKicCustomNetwork/create_custom_network 22.72
204 TestKicCustomNetwork/use_default_bridge_network 22.88
205 TestKicExistingNetwork 23.37
206 TestKicCustomSubnet 23.27
207 TestKicStaticIP 23.9
208 TestMainNoArgs 0.08
209 TestMinikubeProfile 49.2
212 TestMountStart/serial/StartWithMountFirst 7.18
213 TestMountStart/serial/VerifyMountFirst 0.38
214 TestMountStart/serial/StartWithMountSecond 7.2
215 TestMountStart/serial/VerifyMountSecond 0.38
216 TestMountStart/serial/DeleteFirst 2.08
217 TestMountStart/serial/VerifyMountPostDelete 0.41
218 TestMountStart/serial/Stop 1.56
219 TestMountStart/serial/RestartStopped 8.31
220 TestMountStart/serial/VerifyMountPostStop 0.38
223 TestMultiNode/serial/FreshStart2Nodes 63.62
224 TestMultiNode/serial/DeployApp2Nodes 45.61
225 TestMultiNode/serial/PingHostFrom2Pods 1.01
226 TestMultiNode/serial/AddNode 15.06
227 TestMultiNode/serial/MultiNodeLabels 0.06
228 TestMultiNode/serial/ProfileList 0.43
229 TestMultiNode/serial/CopyFile 13.58
230 TestMultiNode/serial/StopNode 2.89
231 TestMultiNode/serial/StartAfterStop 13.73
232 TestMultiNode/serial/RestartKeepsNodes 99.83
233 TestMultiNode/serial/DeleteNode 5.79
234 TestMultiNode/serial/StopMultiNode 21.85
235 TestMultiNode/serial/RestartMultiNode 61.75
236 TestMultiNode/serial/ValidateNameConflict 25.41
240 TestPreload 203.15
242 TestScheduledStopUnix 95.14
243 TestSkaffold 123.52
245 TestInsufficientStorage 10.35
261 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 9.56
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.83
263 TestStoppedBinaryUpgrade/Setup 0.91
265 TestStoppedBinaryUpgrade/MinikubeLogs 3.45
267 TestPause/serial/Start 36.03
268 TestPause/serial/SecondStartNoReconfiguration 32.41
269 TestPause/serial/Pause 0.72
270 TestPause/serial/VerifyStatus 0.39
271 TestPause/serial/Unpause 0.58
272 TestPause/serial/PauseAgain 0.82
273 TestPause/serial/DeletePaused 2.54
274 TestPause/serial/VerifyDeletedResources 0.53
283 TestNoKubernetes/serial/StartNoK8sWithVersion 0.47
284 TestNoKubernetes/serial/StartWithK8s 22.25
285 TestNoKubernetes/serial/StartWithStopK8s 8.15
286 TestNoKubernetes/serial/Start 6.28
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
288 TestNoKubernetes/serial/ProfileList 1.23
289 TestNoKubernetes/serial/Stop 1.59
290 TestNoKubernetes/serial/StartNoArgs 7.27
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
292 TestNetworkPlugins/group/auto/Start 38.12
293 TestNetworkPlugins/group/auto/KubeletFlags 0.42
294 TestNetworkPlugins/group/auto/NetCatPod 11.2
295 TestNetworkPlugins/group/auto/DNS 0.14
296 TestNetworkPlugins/group/auto/Localhost 0.12
297 TestNetworkPlugins/group/auto/HairPin 0.12
298 TestNetworkPlugins/group/kindnet/Start 49.99
299 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
300 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
301 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
302 TestNetworkPlugins/group/kindnet/DNS 0.14
303 TestNetworkPlugins/group/kindnet/Localhost 0.11
304 TestNetworkPlugins/group/kindnet/HairPin 0.12
305 TestNetworkPlugins/group/flannel/Start 49.93
306 TestNetworkPlugins/group/flannel/ControllerPod 6.01
307 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
308 TestNetworkPlugins/group/flannel/NetCatPod 11.25
309 TestNetworkPlugins/group/flannel/DNS 0.14
310 TestNetworkPlugins/group/flannel/Localhost 0.12
311 TestNetworkPlugins/group/flannel/HairPin 0.12
312 TestNetworkPlugins/group/enable-default-cni/Start 37.55
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
315 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
316 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
317 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
318 TestNetworkPlugins/group/bridge/Start 38.88
319 TestNetworkPlugins/group/kubenet/Start 38.73
320 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
321 TestNetworkPlugins/group/bridge/NetCatPod 11.28
322 TestNetworkPlugins/group/kubenet/KubeletFlags 0.45
323 TestNetworkPlugins/group/kubenet/NetCatPod 10.25
324 TestNetworkPlugins/group/bridge/DNS 0.14
325 TestNetworkPlugins/group/bridge/Localhost 0.13
326 TestNetworkPlugins/group/bridge/HairPin 0.11
327 TestNetworkPlugins/group/kubenet/DNS 0.14
328 TestNetworkPlugins/group/kubenet/Localhost 0.12
329 TestNetworkPlugins/group/kubenet/HairPin 0.13
330 TestNetworkPlugins/group/custom-flannel/Start 69.52
331 TestNetworkPlugins/group/calico/Start 76.07
332 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
333 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.33
334 TestNetworkPlugins/group/calico/ControllerPod 6.01
335 TestNetworkPlugins/group/custom-flannel/DNS 0.14
336 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
337 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
338 TestNetworkPlugins/group/calico/KubeletFlags 0.39
339 TestNetworkPlugins/group/calico/NetCatPod 11.35
340 TestNetworkPlugins/group/calico/DNS 0.14
341 TestNetworkPlugins/group/calico/Localhost 0.13
342 TestNetworkPlugins/group/calico/HairPin 0.13
343 TestNetworkPlugins/group/false/Start 38.41
346 TestNetworkPlugins/group/false/KubeletFlags 0.38
347 TestNetworkPlugins/group/false/NetCatPod 11.27
348 TestNetworkPlugins/group/false/DNS 0.14
349 TestNetworkPlugins/group/false/Localhost 0.12
350 TestNetworkPlugins/group/false/HairPin 0.12
352 TestStartStop/group/no-preload/serial/FirstStart 156.27
353 TestStartStop/group/no-preload/serial/DeployApp 9.53
354 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
355 TestStartStop/group/no-preload/serial/Stop 10.86
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.43
357 TestStartStop/group/no-preload/serial/SecondStart 332.9
360 TestStartStop/group/old-k8s-version/serial/Stop 1.58
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.44
363 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 18.01
364 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
365 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
366 TestStartStop/group/no-preload/serial/Pause 3.33
368 TestStartStop/group/embed-certs/serial/FirstStart 37.14
369 TestStartStop/group/embed-certs/serial/DeployApp 8.3
370 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
371 TestStartStop/group/embed-certs/serial/Stop 10.93
372 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.48
373 TestStartStop/group/embed-certs/serial/SecondStart 314.61
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
378 TestStartStop/group/embed-certs/serial/Pause 3.26
380 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.7
381 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
383 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.01
384 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.44
385 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 312.09
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 20.01
388 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
389 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
390 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.16
392 TestStartStop/group/newest-cni/serial/FirstStart 35.12
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
395 TestStartStop/group/newest-cni/serial/Stop 6.01
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.44
397 TestStartStop/group/newest-cni/serial/SecondStart 28.23
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
401 TestStartStop/group/newest-cni/serial/Pause 3.17
TestDownloadOnly/v1.16.0/json-events (17.38s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-178000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-178000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (17.377767834s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.38s)
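
Note: this subtest drives `minikube start -o=json`, which emits one JSON event per line on stdout. As a hedged sketch (the exact event schema is not reproduced in this report, so each line is decoded into a generic map and the "type" key is an assumption), a consumer of that stream could look like:

```go
// Sketch: read line-delimited JSON events, e.g. piped from
// `minikube start -o=json ...`, decoding each line generically.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON output
		}
		fmt.Printf("event: %v\n", ev["type"]) // "type" key assumed for the sketch
	}
}
```
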

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-178000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-178000: exit status 85 (313.077446ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-178000 | jenkins | v1.32.0 | 03 Jan 24 11:50 PST |          |
	|         | -p download-only-178000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 11:50:10
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 11:50:10.941244   11092 out.go:296] Setting OutFile to fd 1 ...
	I0103 11:50:10.941551   11092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 11:50:10.941557   11092 out.go:309] Setting ErrFile to fd 2...
	I0103 11:50:10.941561   11092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 11:50:10.941742   11092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	W0103 11:50:10.941845   11092 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17885-10646/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17885-10646/.minikube/config/config.json: no such file or directory
	I0103 11:50:10.943674   11092 out.go:303] Setting JSON to true
	I0103 11:50:10.966361   11092 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4780,"bootTime":1704306630,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 11:50:10.966479   11092 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 11:50:10.987982   11092 out.go:97] [download-only-178000] minikube v1.32.0 on Darwin 14.2
	I0103 11:50:11.009527   11092 out.go:169] MINIKUBE_LOCATION=17885
	I0103 11:50:10.988203   11092 notify.go:220] Checking for updates...
	W0103 11:50:10.988216   11092 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball: no such file or directory
	I0103 11:50:11.051606   11092 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 11:50:11.074665   11092 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 11:50:11.095666   11092 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 11:50:11.116681   11092 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	W0103 11:50:11.158583   11092 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 11:50:11.159099   11092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 11:50:11.215662   11092 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 11:50:11.215800   11092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 11:50:11.320782   11092 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-03 19:50:11.310618921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 11:50:11.342532   11092 out.go:97] Using the docker driver based on user configuration
	I0103 11:50:11.342581   11092 start.go:298] selected driver: docker
	I0103 11:50:11.342597   11092 start.go:902] validating driver "docker" against <nil>
	I0103 11:50:11.342817   11092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 11:50:11.444069   11092 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-03 19:50:11.435218153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 11:50:11.444238   11092 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 11:50:11.447497   11092 start_flags.go:394] Using suggested 5885MB memory alloc based on sys=32768MB, container=5933MB
	I0103 11:50:11.447672   11092 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0103 11:50:11.469131   11092 out.go:169] Using Docker Desktop driver with root privileges
	I0103 11:50:11.490050   11092 cni.go:84] Creating CNI manager for ""
	I0103 11:50:11.490095   11092 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0103 11:50:11.490116   11092 start_flags.go:323] config:
	{Name:download-only-178000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-178000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 11:50:11.512144   11092 out.go:97] Starting control plane node download-only-178000 in cluster download-only-178000
	I0103 11:50:11.512186   11092 cache.go:121] Beginning downloading kic base image for docker with docker
	I0103 11:50:11.534143   11092 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0103 11:50:11.534229   11092 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0103 11:50:11.534329   11092 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 11:50:11.587248   11092 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 11:50:11.587513   11092 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0103 11:50:11.587654   11092 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 11:50:11.608324   11092 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0103 11:50:11.608357   11092 cache.go:56] Caching tarball of preloaded images
	I0103 11:50:11.608641   11092 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0103 11:50:11.630060   11092 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0103 11:50:11.630083   11092 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0103 11:50:11.707177   11092 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0103 11:50:18.742646   11092 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0103 11:50:18.742825   11092 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0103 11:50:19.292505   11092 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0103 11:50:19.292740   11092 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/download-only-178000/config.json ...
	I0103 11:50:19.292764   11092 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/download-only-178000/config.json: {Name:mk8abd94fe0a4fba36ac0d7f3b023978dc7337ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 11:50:19.293044   11092 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0103 11:50:19.293339   11092 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-178000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.31s)
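
The non-zero exit recorded above is the subtest's expected outcome, not a regression: a --download-only profile never creates a control plane node, so "minikube logs" has nothing to read and fails with exit status 85. A minimal Go sketch of asserting that specific exit code (an illustration, not the suite's actual helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same command the subtest runs; Run returns an
		// *exec.ExitError when the process exits non-zero.
		err := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-178000").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 85 {
			fmt.Println("expected failure: exit status 85 (no control plane node)")
			return
		}
		fmt.Println("unexpected result:", err)
	}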

TestDownloadOnly/v1.28.4/json-events (44.1s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-178000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-178000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (44.100723626s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (44.10s)
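
The json-events subtest drives "minikube start -o=json", which streams one JSON event per line on stdout. A minimal consumer sketch in Go; the "type" field name is an assumption about the event shape made for illustration, not a documented contract:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Read newline-delimited JSON, e.g. piped from:
		//   minikube start -o=json --download-only ...
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON line
			}
			fmt.Println("event:", ev["type"]) // assumed field name
		}
	}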

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-178000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-178000: exit status 85 (318.740358ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-178000 | jenkins | v1.32.0 | 03 Jan 24 11:50 PST |          |
	|         | -p download-only-178000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-178000 | jenkins | v1.32.0 | 03 Jan 24 11:50 PST |          |
	|         | -p download-only-178000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 11:50:28
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 11:50:28.637371   11136 out.go:296] Setting OutFile to fd 1 ...
	I0103 11:50:28.637599   11136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 11:50:28.637604   11136 out.go:309] Setting ErrFile to fd 2...
	I0103 11:50:28.637608   11136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 11:50:28.637788   11136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	W0103 11:50:28.637889   11136 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17885-10646/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17885-10646/.minikube/config/config.json: no such file or directory
	I0103 11:50:28.639099   11136 out.go:303] Setting JSON to true
	I0103 11:50:28.661979   11136 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4798,"bootTime":1704306630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 11:50:28.662081   11136 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 11:50:28.683187   11136 out.go:97] [download-only-178000] minikube v1.32.0 on Darwin 14.2
	I0103 11:50:28.704103   11136 out.go:169] MINIKUBE_LOCATION=17885
	I0103 11:50:28.683303   11136 notify.go:220] Checking for updates...
	I0103 11:50:28.745849   11136 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 11:50:28.766967   11136 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 11:50:28.787958   11136 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 11:50:28.808925   11136 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	W0103 11:50:28.850938   11136 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 11:50:28.851576   11136 config.go:182] Loaded profile config "download-only-178000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0103 11:50:28.851634   11136 start.go:810] api.Load failed for download-only-178000: filestore "download-only-178000": Docker machine "download-only-178000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 11:50:28.851763   11136 driver.go:392] Setting default libvirt URI to qemu:///system
	W0103 11:50:28.851794   11136 start.go:810] api.Load failed for download-only-178000: filestore "download-only-178000": Docker machine "download-only-178000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 11:50:28.907343   11136 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 11:50:28.907496   11136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 11:50:29.008454   11136 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-03 19:50:28.999863359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 11:50:29.029763   11136 out.go:97] Using the docker driver based on existing profile
	I0103 11:50:29.029820   11136 start.go:298] selected driver: docker
	I0103 11:50:29.029838   11136 start.go:902] validating driver "docker" against &{Name:download-only-178000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-178000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 11:50:29.030077   11136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 11:50:29.132228   11136 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-03 19:50:29.123984885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 11:50:29.135401   11136 cni.go:84] Creating CNI manager for ""
	I0103 11:50:29.135427   11136 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 11:50:29.135440   11136 start_flags.go:323] config:
	{Name:download-only-178000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-178000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 11:50:29.156841   11136 out.go:97] Starting control plane node download-only-178000 in cluster download-only-178000
	I0103 11:50:29.156865   11136 cache.go:121] Beginning downloading kic base image for docker with docker
	I0103 11:50:29.177934   11136 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0103 11:50:29.178035   11136 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0103 11:50:29.178132   11136 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 11:50:29.229775   11136 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0103 11:50:29.229801   11136 cache.go:56] Caching tarball of preloaded images
	I0103 11:50:29.230005   11136 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0103 11:50:29.230968   11136 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 11:50:29.231077   11136 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0103 11:50:29.231096   11136 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0103 11:50:29.231103   11136 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0103 11:50:29.231112   11136 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0103 11:50:29.250590   11136 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0103 11:50:29.250653   11136 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0103 11:50:29.324996   11136 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0103 11:50:35.467698   11136 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0103 11:50:35.467903   11136 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0103 11:50:36.165477   11136 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0103 11:50:36.165563   11136 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/download-only-178000/config.json ...
	I0103 11:50:36.165913   11136 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0103 11:50:36.166171   11136 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-178000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.32s)
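
The preload URLs in the log carry a ?checksum=md5:... parameter, and the "getting/saving/verifying checksum" lines show the tarball being validated before use. A self-contained sketch of that verification step, with the file name and digest taken from the v1.28.4 download above (this is not minikube's preload code):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 streams the file through an MD5 hash and compares the digest.
	func verifyMD5(path, want string) (bool, error) {
		f, err := os.Open(path)
		if err != nil {
			return false, err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return false, err
		}
		return hex.EncodeToString(h.Sum(nil)) == want, nil
	}

	func main() {
		ok, err := verifyMD5("preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4",
			"7ebdea7754e21f51b865dbfc36b53b7d")
		fmt.Println(ok, err)
	}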

TestDownloadOnly/v1.29.0-rc.2/json-events (43.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-178000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-178000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (43.132124293s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (43.13s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.34s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-178000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-178000: exit status 85 (336.71212ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-178000 | jenkins | v1.32.0 | 03 Jan 24 11:50 PST |          |
	|         | -p download-only-178000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-178000 | jenkins | v1.32.0 | 03 Jan 24 11:50 PST |          |
	|         | -p download-only-178000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-178000 | jenkins | v1.32.0 | 03 Jan 24 11:51 PST |          |
	|         | -p download-only-178000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 11:51:13
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 11:51:13.055620   11179 out.go:296] Setting OutFile to fd 1 ...
	I0103 11:51:13.055824   11179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 11:51:13.055830   11179 out.go:309] Setting ErrFile to fd 2...
	I0103 11:51:13.055834   11179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 11:51:13.056021   11179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	W0103 11:51:13.056119   11179 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17885-10646/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17885-10646/.minikube/config/config.json: no such file or directory
	I0103 11:51:13.057395   11179 out.go:303] Setting JSON to true
	I0103 11:51:13.079897   11179 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4843,"bootTime":1704306630,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 11:51:13.080010   11179 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 11:51:13.102149   11179 out.go:97] [download-only-178000] minikube v1.32.0 on Darwin 14.2
	I0103 11:51:13.123708   11179 out.go:169] MINIKUBE_LOCATION=17885
	I0103 11:51:13.102377   11179 notify.go:220] Checking for updates...
	I0103 11:51:13.166842   11179 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 11:51:13.188710   11179 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 11:51:13.209664   11179 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 11:51:13.232833   11179 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	W0103 11:51:13.282440   11179 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 11:51:13.283250   11179 config.go:182] Loaded profile config "download-only-178000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0103 11:51:13.283346   11179 start.go:810] api.Load failed for download-only-178000: filestore "download-only-178000": Docker machine "download-only-178000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 11:51:13.283520   11179 driver.go:392] Setting default libvirt URI to qemu:///system
	W0103 11:51:13.283568   11179 start.go:810] api.Load failed for download-only-178000: filestore "download-only-178000": Docker machine "download-only-178000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 11:51:13.340562   11179 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 11:51:13.340703   11179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 11:51:13.445531   11179 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-03 19:51:13.435973109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 11:51:13.466851   11179 out.go:97] Using the docker driver based on existing profile
	I0103 11:51:13.466923   11179 start.go:298] selected driver: docker
	I0103 11:51:13.466933   11179 start.go:902] validating driver "docker" against &{Name:download-only-178000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-178000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 11:51:13.467207   11179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 11:51:13.570550   11179 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-03 19:51:13.56123891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 11:51:13.573742   11179 cni.go:84] Creating CNI manager for ""
	I0103 11:51:13.573766   11179 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0103 11:51:13.573783   11179 start_flags.go:323] config:
	{Name:download-only-178000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-178000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 11:51:13.595522   11179 out.go:97] Starting control plane node download-only-178000 in cluster download-only-178000
	I0103 11:51:13.595564   11179 cache.go:121] Beginning downloading kic base image for docker with docker
	I0103 11:51:13.617393   11179 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0103 11:51:13.617448   11179 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0103 11:51:13.617551   11179 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 11:51:13.669449   11179 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 11:51:13.669876   11179 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0103 11:51:13.669898   11179 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0103 11:51:13.669904   11179 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0103 11:51:13.669911   11179 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0103 11:51:13.693774   11179 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0103 11:51:13.693832   11179 cache.go:56] Caching tarball of preloaded images
	I0103 11:51:13.694172   11179 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0103 11:51:13.715555   11179 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0103 11:51:13.715585   11179 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0103 11:51:13.795729   11179 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:74b99cd9fa76659778caad266ad399ba -> /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0103 11:51:19.578831   11179 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0103 11:51:19.579025   11179 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0103 11:51:20.182077   11179 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0103 11:51:20.182158   11179 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/download-only-178000/config.json ...
	I0103 11:51:20.182573   11179 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0103 11:51:20.182795   11179 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/darwin/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-178000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.34s)
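
By this third run the kicbase image is already in the local cache, so the log above shows "exists in cache, skipping pull" instead of a fresh download. The pattern, sketched with a placeholder cache path and stub fetcher (not minikube internals):

	package main

	import (
		"fmt"
		"os"
	)

	// ensureCached fetches only when the cached file is absent.
	func ensureCached(path string, fetch func(string) error) error {
		if _, err := os.Stat(path); err == nil {
			fmt.Println(path, "exists in cache, skipping pull")
			return nil
		} else if !os.IsNotExist(err) {
			return err
		}
		fmt.Println("downloading to", path)
		return fetch(path)
	}

	func main() {
		// Placeholder cache path and fetcher, for illustration only.
		_ = ensureCached("/tmp/kicbase.tar", func(p string) error {
			return os.WriteFile(p, []byte("stub"), 0o644)
		})
	}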

TestDownloadOnly/DeleteAll (0.65s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.65s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-178000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (1.97s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-006000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-006000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-006000
--- PASS: TestDownloadOnlyKic (1.97s)

TestBinaryMirror (1.64s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-957000 --alsologtostderr --binary-mirror http://127.0.0.1:57208 --driver=docker 
aaa_download_only_test.go:307: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-957000 --alsologtostderr --binary-mirror http://127.0.0.1:57208 --driver=docker : (1.007848056s)
helpers_test.go:175: Cleaning up "binary-mirror-957000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-957000
--- PASS: TestBinaryMirror (1.64s)
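The --binary-mirror flag above points minikube at a local HTTP server on 127.0.0.1:57208 instead of dl.k8s.io. A minimal sketch of such a mirror, assuming a hypothetical ./mirror directory laid out like dl.k8s.io (this is not the test's own helper):

	// Static file server standing in for dl.k8s.io on the loopback address.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// e.g. ./mirror/v1.29.0-rc.2/bin/darwin/amd64/kubectl (hypothetical layout)
		log.Fatal(http.ListenAndServe("127.0.0.1:57208", http.FileServer(http.Dir("./mirror"))))
	}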

TestOffline (42.82s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-090000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-090000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (39.99862536s)
helpers_test.go:175: Cleaning up "offline-docker-090000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-090000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-090000: (2.825076466s)
--- PASS: TestOffline (42.82s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-927000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-927000: exit status 85 (192.900384ms)

-- stdout --
	* Profile "addons-927000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-927000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)
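The test treats "exit status 85" as the expected outcome for a missing profile. In Go that code is reachable through *exec.ExitError, roughly as in this sketch (the binary and arguments match the log; the assertion style is illustrative, not the test's helper):

	// Run the command and assert on a specific exit code via exec.ExitError.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "addons", "enable", "dashboard", "-p", "addons-927000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s\n", out)
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 85 {
			fmt.Println("got expected exit status 85 for a nonexistent profile")
		}
	}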

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-927000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-927000: exit status 85 (213.173946ms)

-- stdout --
	* Profile "addons-927000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-927000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (157.59s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-927000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-927000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m37.591841664s)
--- PASS: TestAddons/Setup (157.59s)

TestAddons/parallel/InspektorGadget (10.84s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-srp9w" [1b5c4968-684e-4fbc-b28e-f6169ff61e2c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004166229s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-927000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-927000: (5.834203983s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

TestAddons/parallel/MetricsServer (6.8s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.295739ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-8gdfc" [96d34b31-f774-490f-bb0e-3e7e8241adf4] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005835447s
addons_test.go:415: (dbg) Run:  kubectl --context addons-927000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-927000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.80s)

TestAddons/parallel/HelmTiller (10.3s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.756623ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-p787c" [0354428c-034d-42e4-9df8-4629ae75eb3c] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.051138868s
addons_test.go:473: (dbg) Run:  kubectl --context addons-927000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-927000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.339205288s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-927000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.30s)

TestAddons/parallel/CSI (66.36s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 24.976926ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-927000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-927000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fb316368-30d0-4b2b-baac-abcc2022ee15] Pending
helpers_test.go:344: "task-pv-pod" [fb316368-30d0-4b2b-baac-abcc2022ee15] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fb316368-30d0-4b2b-baac-abcc2022ee15] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.004822099s
addons_test.go:584: (dbg) Run:  kubectl --context addons-927000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-927000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-927000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-927000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-927000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-927000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-927000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4bbcc916-d00c-4acd-b35f-91a99398d6c7] Pending
helpers_test.go:344: "task-pv-pod-restore" [4bbcc916-d00c-4acd-b35f-91a99398d6c7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4bbcc916-d00c-4acd-b35f-91a99398d6c7] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003665514s
addons_test.go:626: (dbg) Run:  kubectl --context addons-927000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-927000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-927000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-927000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-927000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.788596686s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-927000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-darwin-amd64 -p addons-927000 addons disable volumesnapshots --alsologtostderr -v=1: (1.067396998s)
--- PASS: TestAddons/parallel/CSI (66.36s)
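The long runs of identical helpers_test.go:394 lines above are poll loops: the helper re-runs the jsonpath query until the PVC reports Bound. A minimal standalone sketch of that loop (the context name, PVC name, and 6m budget come from the log; the 2-second interval is an assumption):

	// Poll a PVC's phase via kubectl until it is Bound or a timeout elapses.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitPVCBound(ctx, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", "default").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s not Bound within %s", name, timeout)
	}

	func main() {
		fmt.Println(waitPVCBound("addons-927000", "hpvc", 6*time.Minute))
	}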

TestAddons/parallel/Headlamp (14.53s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-927000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-927000 --alsologtostderr -v=1: (1.522995735s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-zsp4b" [dfb4f46c-84ab-44c7-b484-468c6503224e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-zsp4b" [dfb4f46c-84ab-44c7-b484-468c6503224e] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005695758s
--- PASS: TestAddons/parallel/Headlamp (14.53s)

TestAddons/parallel/CloudSpanner (5.62s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-5tg7m" [d31953d3-05d3-4b04-80df-a418b8d6c7c2] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005258504s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-927000
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

TestAddons/parallel/LocalPath (54.3s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-927000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-927000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-927000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [87d1b36a-9203-49a5-b4a9-0521de94f554] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [87d1b36a-9203-49a5-b4a9-0521de94f554] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [87d1b36a-9203-49a5-b4a9-0521de94f554] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004958628s
addons_test.go:891: (dbg) Run:  kubectl --context addons-927000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-927000 ssh "cat /opt/local-path-provisioner/pvc-abcfb675-7719-4c3e-8244-295416ed7a5a_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-927000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-927000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-927000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-927000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.305779283s)
--- PASS: TestAddons/parallel/LocalPath (54.30s)

TestAddons/parallel/NvidiaDevicePlugin (5.62s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zmtfc" [92eaef0b-4bff-46d0-9990-885d872a41a1] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00526834s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-927000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.62s)

TestAddons/parallel/Yakd (6.05s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-dr4fc" [399b1b0d-cf5b-473d-9337-0fbd7880f399] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.048415301s
--- PASS: TestAddons/parallel/Yakd (6.05s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-927000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-927000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.77s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-927000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-927000: (11.048374085s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-927000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-927000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-927000
--- PASS: TestAddons/StoppedEnableDisable (11.77s)

TestCertOptions (24.22s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-925000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0103 12:29:58.392312   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-925000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (21.188172648s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-925000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-925000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-925000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-925000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-925000: (2.219329795s)
--- PASS: TestCertOptions (24.22s)
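The openssl step above checks that the extra --apiserver-ips/--apiserver-names values landed in the API server certificate as SANs. The same check in Go, assuming a local copy of /var/lib/minikube/certs/apiserver.crt (the test reads it over minikube ssh instead):

	// Parse the certificate and print its DNS/IP SANs.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com
		fmt.Println("IP SANs:", cert.IPAddresses)  // expect 127.0.0.1, 192.168.15.15
	}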

TestCertExpiration (227s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-730000 --memory=2048 --cert-expiration=3m --driver=docker 
E0103 12:29:39.405014   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-730000 --memory=2048 --cert-expiration=3m --driver=docker : (22.582312198s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-730000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0103 12:32:56.642498   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:32:56.648094   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:32:56.660243   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:32:56.682442   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:32:56.722897   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:32:56.805111   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:32:56.967221   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:32:57.288375   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:32:57.930685   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:32:59.210972   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:33:01.771056   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:33:06.892056   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-730000 --memory=2048 --cert-expiration=8760h --driver=docker : (21.96203839s)
helpers_test.go:175: Cleaning up "cert-expiration-730000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-730000
E0103 12:33:17.133606   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-730000: (2.453506373s)
--- PASS: TestCertExpiration (227.00s)

TestDockerFlags (25.67s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-125000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-125000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (22.243785008s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-125000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-125000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-125000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-125000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-125000: (2.391595287s)
--- PASS: TestDockerFlags (25.67s)

TestForceSystemdFlag (26.56s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-486000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-486000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (23.570385112s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-486000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-486000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-486000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-486000: (2.510736124s)
--- PASS: TestForceSystemdFlag (26.56s)
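The docker info --format {{.CgroupDriver}} step asserts the daemon switched to the systemd cgroup driver when --force-systemd was passed. A standalone sketch against a local daemon (the test runs the same command inside the node via minikube ssh):

	// Query docker's cgroup driver with a Go template and check it.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			panic(err)
		}
		driver := strings.TrimSpace(string(out))
		fmt.Println("cgroup driver:", driver)
		if driver != "systemd" {
			fmt.Println("force-systemd did not take effect")
		}
	}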

TestForceSystemdEnv (26.69s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-823000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-823000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (23.546534399s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-823000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-823000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-823000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-823000: (2.656584052s)
--- PASS: TestForceSystemdEnv (26.69s)

TestHyperKitDriverInstallOrUpdate (12.44s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (12.44s)

TestErrorSpam/setup (21.8s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-264000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-264000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 --driver=docker : (21.798858782s)
--- PASS: TestErrorSpam/setup (21.80s)

TestErrorSpam/start (2.06s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 start --dry-run
--- PASS: TestErrorSpam/start (2.06s)

TestErrorSpam/status (1.19s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 status
--- PASS: TestErrorSpam/status (1.19s)

TestErrorSpam/pause (1.65s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 pause
--- PASS: TestErrorSpam/pause (1.65s)

TestErrorSpam/unpause (1.75s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (12.55s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 stop: (11.920862034s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-264000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-264000 stop
--- PASS: TestErrorSpam/stop (12.55s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/test/nested/copy/11090/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (36.98s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2233: (dbg) Done: out/minikube-darwin-amd64 start -p functional-307000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (36.978643502s)
--- PASS: TestFunctional/serial/StartWithProxy (36.98s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.42s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-307000 --alsologtostderr -v=8: (39.418240689s)
functional_test.go:659: soft start took 39.418696268s for "functional-307000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.42s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-307000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:3.1: (1.297847862s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:3.3: (1.226384303s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:latest: (1.110942248s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

TestFunctional/serial/CacheCmd/cache/add_local (1.61s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-307000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3271532757/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache add minikube-local-cache-test:functional-307000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 cache add minikube-local-cache-test:functional-307000: (1.058405536s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache delete minikube-local-cache-test:functional-307000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-307000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.61s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (388.173111ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
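The reload round-trip above is: remove the image inside the node, confirm crictl no longer finds it (the expected exit status 1 with the FATA message), run cache reload, then confirm it is back. A rough standalone sketch of that sequence, assuming it runs from the repo root so out/minikube-darwin-amd64 resolves:

	// Drive the rmi -> inspecti (expected failure) -> cache reload -> inspecti sequence.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		return exec.Command("out/minikube-darwin-amd64", args...).Run()
	}

	func main() {
		_ = run("-p", "functional-307000", "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
		if err := run("-p", "functional-307000", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image gone as expected:", err)
		}
		_ = run("-p", "functional-307000", "cache", "reload")
		fmt.Println("after reload:", run("-p", "functional-307000", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"))
	}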

TestFunctional/serial/CacheCmd/cache/delete (0.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.58s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 kubectl -- --context functional-307000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.58s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.78s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-307000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.78s)

TestFunctional/serial/ExtraConfig (37.36s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0103 11:59:39.361997   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 11:59:39.369132   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 11:59:39.380100   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 11:59:39.402300   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 11:59:39.442441   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 11:59:39.523468   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 11:59:39.684259   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 11:59:40.004648   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 11:59:40.644924   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 11:59:41.925064   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 11:59:44.485263   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-307000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.361890774s)
functional_test.go:757: restart took 37.362030603s for "functional-307000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.36s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-307000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
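The phase/status lines above come from decoding kubectl's JSON output for the control-plane pods. A minimal sketch with just the fields such a check needs (the struct shape is an assumption for illustration; the kubectl flags match the log):

	// Decode "kubectl get po -o json" and report each pod's phase and Ready condition.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-307000",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s ready: %s\n", p.Metadata.Name, c.Status)
				}
			}
		}
	}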

TestFunctional/serial/LogsCmd (3.08s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 logs: (3.082680478s)
--- PASS: TestFunctional/serial/LogsCmd (3.08s)

TestFunctional/serial/LogsFileCmd (3.02s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1160770267/001/logs.txt
E0103 11:59:49.606004   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1160770267/001/logs.txt: (3.020207319s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.02s)

TestFunctional/serial/InvalidService (4.56s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-307000 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-307000
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-307000: exit status 115 (567.521018ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32648 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-307000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.56s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 config get cpus: exit status 14 (73.660212ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 config get cpus: exit status 14 (58.833934ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
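
Note: the round trip above depends on "config get" signaling a missing key with exit status 14, which is what both Non-zero exit lines show. A minimal sketch of the same check in Go, assuming the binary path used throughout this report (illustrative, not the suite's implementation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes the minikube binary used in this report and returns
// the process exit code (0 on success).
func run(args ...string) int {
	err := exec.Command("out/minikube-darwin-amd64", args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return 0
}

func main() {
	p := "functional-307000"
	run("-p", p, "config", "unset", "cpus")
	fmt.Println(run("-p", p, "config", "get", "cpus")) // 14: key not found
	run("-p", p, "config", "set", "cpus", "2")
	fmt.Println(run("-p", p, "config", "get", "cpus")) // 0: key present
	run("-p", p, "config", "unset", "cpus")
	fmt.Println(run("-p", p, "config", "get", "cpus")) // 14 again
}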

TestFunctional/parallel/DashboardCmd (12.54s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-307000 --alsologtostderr -v=1]
2024/01/03 12:01:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-307000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 13476: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.54s)
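
Note: the [DEBUG] GET line above is the readiness probe against the dashboard proxy URL served by the daemonized command. A hedged Go sketch of such a poll-until-200 loop (the URL is the one from this run; the helper below is hypothetical, not the suite's code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHTTPOK polls url until it answers 200 OK or the timeout passes.
func waitHTTPOK(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no 200 from %s within %s", url, timeout)
}

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := waitHTTPOK(url, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}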

TestFunctional/parallel/DryRun (1.37s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-307000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (612.38832ms)

-- stdout --
	* [functional-307000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0103 12:01:04.771932   13415 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:01:04.772127   13415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:01:04.772133   13415 out.go:309] Setting ErrFile to fd 2...
	I0103 12:01:04.772137   13415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:01:04.772329   13415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:01:04.773717   13415 out.go:303] Setting JSON to false
	I0103 12:01:04.796156   13415 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5434,"bootTime":1704306630,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 12:01:04.796277   13415 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 12:01:04.817974   13415 out.go:177] * [functional-307000] minikube v1.32.0 on Darwin 14.2
	I0103 12:01:04.860096   13415 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 12:01:04.860160   13415 notify.go:220] Checking for updates...
	I0103 12:01:04.881903   13415 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 12:01:04.902932   13415 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 12:01:04.923889   13415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 12:01:04.944826   13415 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	I0103 12:01:04.966152   13415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 12:01:04.988635   13415 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0103 12:01:04.989371   13415 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 12:01:05.046574   13415 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 12:01:05.046753   13415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:01:05.152157   13415 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 20:01:05.142208705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:01:05.199043   13415 out.go:177] * Using the docker driver based on existing profile
	I0103 12:01:05.219884   13415 start.go:298] selected driver: docker
	I0103 12:01:05.219914   13415 start.go:902] validating driver "docker" against &{Name:functional-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-307000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:01:05.220035   13415 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 12:01:05.244670   13415 out.go:177] 
	W0103 12:01:05.265770   13415 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0103 12:01:05.286807   13415 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.37s)
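
Note: exit status 23 above is minikube's RSRC_INSUFFICIENT_REQ_MEMORY reason: the requested 250MiB falls below the 1800MB floor quoted in the message. A sketch of that comparison, assuming the mixed MiB/MB units mean binary and decimal megabytes respectively (an assumption; the parsing and the exact floor live in minikube itself):

package main

import "fmt"

const (
	mib = 1024 * 1024 // binary megabyte
	mb  = 1000 * 1000 // decimal megabyte
)

func main() {
	requested := 250 * mib // 262,144,000 bytes
	minimum := 1800 * mb   // 1,800,000,000 bytes
	if requested < minimum {
		fmt.Printf("requested %dMiB < usable minimum %dMB -> exit status 23\n",
			requested/mib, minimum/mb)
	}
}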

TestFunctional/parallel/InternationalLanguage (0.73s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-307000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (731.140465ms)

-- stdout --
	* [functional-307000] minikube v1.32.0 sur Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0103 12:01:04.037437   13391 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:01:04.037623   13391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:01:04.037629   13391 out.go:309] Setting ErrFile to fd 2...
	I0103 12:01:04.037633   13391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:01:04.037855   13391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:01:04.039849   13391 out.go:303] Setting JSON to false
	I0103 12:01:04.064153   13391 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5434,"bootTime":1704306630,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0103 12:01:04.064254   13391 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0103 12:01:04.085785   13391 out.go:177] * [functional-307000] minikube v1.32.0 sur Darwin 14.2
	I0103 12:01:04.148465   13391 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 12:01:04.127660   13391 notify.go:220] Checking for updates...
	I0103 12:01:04.190493   13391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	I0103 12:01:04.232486   13391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0103 12:01:04.274414   13391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 12:01:04.316736   13391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	I0103 12:01:04.358586   13391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 12:01:04.379783   13391 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0103 12:01:04.380180   13391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 12:01:04.437225   13391 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0103 12:01:04.437387   13391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 12:01:04.541375   13391 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-03 20:01:04.530625887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0103 12:01:04.562936   13391 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0103 12:01:04.604732   13391 start.go:298] selected driver: docker
	I0103 12:01:04.604762   13391 start.go:902] validating driver "docker" against &{Name:functional-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-307000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 12:01:04.604884   13391 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 12:01:04.630849   13391 out.go:177] 
	W0103 12:01:04.651806   13391 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0103 12:01:04.672646   13391 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.73s)

TestFunctional/parallel/StatusCmd (1.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (28.93s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3259f1c1-4f7c-41c1-82be-ee09777f2b9c] Running
E0103 11:59:59.847203   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005425315s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-307000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-307000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-307000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-307000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [33a8cf88-50ba-4734-97cc-307985a0e29b] Pending
helpers_test.go:344: "sp-pod" [33a8cf88-50ba-4734-97cc-307985a0e29b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [33a8cf88-50ba-4734-97cc-307985a0e29b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005170255s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-307000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-307000 delete -f testdata/storage-provisioner/pod.yaml
E0103 12:00:20.327297   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-307000 delete -f testdata/storage-provisioner/pod.yaml: (1.213719367s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-307000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a7625e26-f0f4-4a5f-8c51-bd0f56c642f1] Pending
helpers_test.go:344: "sp-pod" [a7625e26-f0f4-4a5f-8c51-bd0f56c642f1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a7625e26-f0f4-4a5f-8c51-bd0f56c642f1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005128029s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-307000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.93s)
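
Note: the "waiting ... for pods matching" lines above come from a poll-until-Running helper. A stdlib-only Go sketch of that pattern, standing in for the suite's helpers_test.go wait (names and intervals are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls kubectl until a pod matching selector reports
// phase Running, or the deadline passes.
func waitRunning(ctx, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
			"get", "po", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running within %s", selector, ns, timeout)
}

func main() {
	fmt.Println(waitRunning("functional-307000", "default", "test=storage-provisioner", 3*time.Minute))
}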

TestFunctional/parallel/SSHCmd (0.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

TestFunctional/parallel/CpCmd (2.58s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh -n functional-307000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cp functional-307000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd2537555423/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh -n functional-307000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh -n functional-307000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.58s)

TestFunctional/parallel/MySQL (32.8s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-307000 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-n42h2" [98ab58a3-f341-4d5c-9b11-e94f5d6ad439] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-n42h2" [98ab58a3-f341-4d5c-9b11-e94f5d6ad439] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.003825229s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-307000 exec mysql-859648c796-n42h2 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-307000 exec mysql-859648c796-n42h2 -- mysql -ppassword -e "show databases;": exit status 1 (149.607286ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-307000 exec mysql-859648c796-n42h2 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-307000 exec mysql-859648c796-n42h2 -- mysql -ppassword -e "show databases;": exit status 1 (117.309401ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-307000 exec mysql-859648c796-n42h2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.80s)
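
Note: the two ERROR 2002 failures above are the gap between the pod reporting Running and mysqld opening its socket; the test simply reruns the query until it succeeds. A hedged Go sketch of that retry (the pod name is the one from this run; the helper is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// showDatabases runs the same query the test issues above.
func showDatabases(pod string) error {
	return exec.Command("kubectl", "--context", "functional-307000",
		"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").Run()
}

func main() {
	pod := "mysql-859648c796-n42h2"
	var err error
	for attempt := 1; attempt <= 5; attempt++ {
		if err = showDatabases(pod); err == nil {
			fmt.Println("mysqld is answering")
			return
		}
		time.Sleep(time.Duration(attempt) * 2 * time.Second) // linear backoff
	}
	fmt.Println("gave up:", err)
}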

TestFunctional/parallel/FileSync (0.54s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/11090/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /etc/test/nested/copy/11090/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.54s)

TestFunctional/parallel/CertSync (3.02s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/11090.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /etc/ssl/certs/11090.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/11090.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /usr/share/ca-certificates/11090.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/110902.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /etc/ssl/certs/110902.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/110902.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /usr/share/ca-certificates/110902.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.02s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-307000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
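
Note: the --template argument above is Go text/template syntax. The same range-over-map construct can be run directly; a self-contained illustration with sample labels (not the node's actual label set):

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-307000",
		"kubernetes.io/os":       "linux",
	}
	// Same shape as: '{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
	t := template.Must(template.New("labels").Parse(`{{range $k, $v := .}}{{$k}} {{end}}`))
	t.Execute(os.Stdout, labels) // prints the keys, space-separated
}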

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "sudo systemctl is-active crio": exit status 1 (416.978781ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

TestFunctional/parallel/License (0.52s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.52s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 12847: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-307000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b469b4b3-2308-485b-b56c-10710a716b36] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b469b4b3-2308-485b-b56c-10710a716b36] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004232928s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.23s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-307000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 12901: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
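
Note: the recurring "unable to kill pid ...: os: process already finished" lines are benign teardown noise: the daemonized tunnel exits before the helper's kill arrives. In Go that case is identifiable via os.ErrProcessDone; a minimal sketch:

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// stop kills a child process, treating "already finished" as success,
// which is the condition helpers_test.go:508 logs above.
func stop(p *os.Process) error {
	err := p.Kill()
	if err == nil || errors.Is(err, os.ErrProcessDone) {
		return nil
	}
	return err
}

func main() {
	cmd := exec.Command("true")
	_ = cmd.Start()
	_ = cmd.Wait()                 // child has already exited here
	fmt.Println(stop(cmd.Process)) // <nil>: Kill reports os.ErrProcessDone
}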

TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-307000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-307000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-slvfr" [e50ae67b-484b-4b91-a70c-623cb60df6dd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-slvfr" [e50ae67b-484b-4b91-a70c-623cb60df6dd] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.006779148s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "393.380902ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "81.147714ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "394.93817ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "82.845742ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 service list -o json
functional_test.go:1493: Took "605.905887ms" to run "out/minikube-darwin-amd64 -p functional-307000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 service --namespace=default --https --url hello-node: signal: killed (15.00221552s)

-- stdout --
	https://127.0.0.1:58133

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:58133
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
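
Note: "signal: killed (15.00221552s)" is expected here. With the Docker driver on darwin, "minikube service --url" keeps a tunnel open in the foreground, so the caller reads the URL from stdout and kills the process after a fixed window. A hedged Go sketch of that pattern with exec.CommandContext (illustrative, not the harness code):

package main

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
		"-p", "functional-307000", "service", "--namespace=default", "--https", "--url", "hello-node")
	var out bytes.Buffer
	cmd.Stdout = &out

	_ = cmd.Run()                           // returns "signal: killed" once the window closes
	fmt.Println("endpoint:", out.String()) // e.g. https://127.0.0.1:58133 in this run
}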

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 service hello-node --url --format={{.IP}}: signal: killed (15.003748599s)
-- stdout --
	127.0.0.1
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.56s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup201143839/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup201143839/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup201143839/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount1: exit status 1 (495.89035ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
E0103 12:01:01.286538   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-307000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup201143839/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup201143839/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup201143839/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.56s)
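The first "findmnt -T /mount1" probe above exits with status 1 and is simply retried: the three mount daemons had not finished wiring their mounts yet. A rough Go sketch of that poll-until-mounted idea (assumed behavior, not the actual helper in functional_test_mount_test.go; profile name and paths mirror the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls findmnt inside the node until the mount point shows
// up or the deadline passes.
func waitForMount(profile, mountPoint string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", profile,
			"ssh", "findmnt -T "+mountPoint)
		if err := cmd.Run(); err == nil {
			return nil // mounted
		} else if time.Now().After(stop) {
			return fmt.Errorf("%s not mounted in time: %w", mountPoint, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	for _, m := range []string{"/mount1", "/mount2", "/mount3"} {
		if err := waitForMount("functional-307000", m, 10*time.Second); err != nil {
			fmt.Println(err)
		}
	}
}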

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 service hello-node --url: signal: killed (15.003813799s)
-- stdout --
	http://127.0.0.1:58176
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:58176
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (0.82s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-307000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-307000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-307000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-307000 image ls --format short --alsologtostderr:
I0103 12:01:45.226150   13790 out.go:296] Setting OutFile to fd 1 ...
I0103 12:01:45.226370   13790 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:01:45.226377   13790 out.go:309] Setting ErrFile to fd 2...
I0103 12:01:45.226381   13790 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:01:45.226599   13790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
I0103 12:01:45.227231   13790 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 12:01:45.227325   13790 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 12:01:45.227708   13790 cli_runner.go:164] Run: docker container inspect functional-307000 --format={{.State.Status}}
I0103 12:01:45.283049   13790 ssh_runner.go:195] Run: systemctl --version
I0103 12:01:45.283140   13790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307000
I0103 12:01:45.340173   13790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57887 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/functional-307000/id_rsa Username:docker}
I0103 12:01:45.428035   13790 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-307000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| docker.io/library/nginx                     | alpine            | 529b5644c430c | 42.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/google-containers/addon-resizer      | functional-307000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/localhost/my-image                | functional-307000 | 548bf4506c62e | 1.24MB |
| docker.io/library/nginx                     | latest            | d453dd892d935 | 187MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-307000 | fe96bb9f94c1b | 30B    |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-307000 image ls --format table --alsologtostderr:
I0103 12:01:49.499003   13830 out.go:296] Setting OutFile to fd 1 ...
I0103 12:01:49.499259   13830 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:01:49.499265   13830 out.go:309] Setting ErrFile to fd 2...
I0103 12:01:49.499270   13830 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:01:49.499482   13830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
I0103 12:01:49.500100   13830 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 12:01:49.500191   13830 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 12:01:49.500600   13830 cli_runner.go:164] Run: docker container inspect functional-307000 --format={{.State.Status}}
I0103 12:01:49.556251   13830 ssh_runner.go:195] Run: systemctl --version
I0103 12:01:49.556325   13830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307000
I0103 12:01:49.610349   13830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57887 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/functional-307000/id_rsa Username:docker}
I0103 12:01:49.694277   13830 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-307000 image ls --format json --alsologtostderr:
[{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"548bf4506c62ea06f4a1860e7155ef602811735a11e712c7504efe46e6273ff0","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-307000"],"size":"1240000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b864483
9d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":
["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"fe96bb9f94c1bd373708204f27f6ea8465e305dfe84e6fbfed803545ca4e058b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-307000"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functiona
l-307000"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-307000 image ls --format json --alsologtostderr:
I0103 12:01:49.193362   13824 out.go:296] Setting OutFile to fd 1 ...
I0103 12:01:49.193598   13824 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:01:49.193604   13824 out.go:309] Setting ErrFile to fd 2...
I0103 12:01:49.193608   13824 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:01:49.193820   13824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
I0103 12:01:49.194433   13824 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 12:01:49.194545   13824 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 12:01:49.194934   13824 cli_runner.go:164] Run: docker container inspect functional-307000 --format={{.State.Status}}
I0103 12:01:49.247108   13824 ssh_runner.go:195] Run: systemctl --version
I0103 12:01:49.247183   13824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307000
I0103 12:01:49.302283   13824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57887 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/functional-307000/id_rsa Username:docker}
I0103 12:01:49.385503   13824 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
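The JSON output above is a flat array of image records. A small Go sketch for decoding it, useful when post-processing these logs; the field set (id, repoDigests, repoTags, size-as-string) is read directly off the captured stdout and is an observation, not a documented stable schema:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	// Pipe in: out/minikube-darwin-amd64 -p functional-307000 image ls --format json
	var imgs []image
	if err := json.NewDecoder(os.Stdin).Decode(&imgs); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			fmt.Printf("%-60s %s bytes\n", tag, img.Size)
		}
	}
}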

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-307000 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: fe96bb9f94c1bd373708204f27f6ea8465e305dfe84e6fbfed803545ca4e058b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-307000
size: "30"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-307000
size: "32900000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-307000 image ls --format yaml --alsologtostderr:
I0103 12:01:45.544813   13796 out.go:296] Setting OutFile to fd 1 ...
I0103 12:01:45.545096   13796 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:01:45.545105   13796 out.go:309] Setting ErrFile to fd 2...
I0103 12:01:45.545110   13796 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:01:45.545334   13796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
I0103 12:01:45.546032   13796 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 12:01:45.546134   13796 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 12:01:45.546674   13796 cli_runner.go:164] Run: docker container inspect functional-307000 --format={{.State.Status}}
I0103 12:01:45.607072   13796 ssh_runner.go:195] Run: systemctl --version
I0103 12:01:45.607158   13796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307000
I0103 12:01:45.663366   13796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57887 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/functional-307000/id_rsa Username:docker}
I0103 12:01:45.751858   13796 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh pgrep buildkitd: exit status 1 (408.954784ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image build -t localhost/my-image:functional-307000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 image build -t localhost/my-image:functional-307000 testdata/build --alsologtostderr: (2.609332552s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-307000 image build -t localhost/my-image:functional-307000 testdata/build --alsologtostderr:
I0103 12:01:46.277746   13812 out.go:296] Setting OutFile to fd 1 ...
I0103 12:01:46.278419   13812 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:01:46.278430   13812 out.go:309] Setting ErrFile to fd 2...
I0103 12:01:46.278435   13812 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:01:46.278774   13812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
I0103 12:01:46.279637   13812 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 12:01:46.280574   13812 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 12:01:46.281336   13812 cli_runner.go:164] Run: docker container inspect functional-307000 --format={{.State.Status}}
I0103 12:01:46.349781   13812 ssh_runner.go:195] Run: systemctl --version
I0103 12:01:46.349874   13812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307000
I0103 12:01:46.417772   13812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57887 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/functional-307000/id_rsa Username:docker}
I0103 12:01:46.504926   13812 build_images.go:151] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1823641468.tar
I0103 12:01:46.505028   13812 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0103 12:01:46.514769   13812 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1823641468.tar
I0103 12:01:46.520195   13812 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1823641468.tar: stat -c "%s %y" /var/lib/minikube/build/build.1823641468.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1823641468.tar': No such file or directory
I0103 12:01:46.520241   13812 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1823641468.tar --> /var/lib/minikube/build/build.1823641468.tar (3072 bytes)
I0103 12:01:46.544107   13812 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1823641468
I0103 12:01:46.553899   13812 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1823641468 -xf /var/lib/minikube/build/build.1823641468.tar
I0103 12:01:46.564450   13812 docker.go:346] Building image: /var/lib/minikube/build/build.1823641468
I0103 12:01:46.564541   13812 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-307000 /var/lib/minikube/build/build.1823641468
#0 building with "default" instance using docker driver
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.1s
#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:548bf4506c62ea06f4a1860e7155ef602811735a11e712c7504efe46e6273ff0 done
#8 naming to localhost/my-image:functional-307000 done
#8 DONE 0.0s
I0103 12:01:48.769202   13812 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-307000 /var/lib/minikube/build/build.1823641468: (2.204681237s)
I0103 12:01:48.769290   13812 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1823641468
I0103 12:01:48.779606   13812 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1823641468.tar
I0103 12:01:48.788009   13812 build_images.go:207] Built localhost/my-image:functional-307000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1823641468.tar
I0103 12:01:48.788041   13812 build_images.go:123] succeeded building to: functional-307000
I0103 12:01:48.788045   13812 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.33s)
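The stderr trace shows the shape of "image build" on the docker driver: the local testdata/build context is tarred, copied into the node over SSH, unpacked under /var/lib/minikube/build, built there with "docker build", and the scratch files are removed. A Go sketch driving the same in-node command sequence via "minikube ssh" (illustrative, not minikube's internals; getting the tarball onto the node is elided, and the directory name is copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// In-node steps mirroring the ssh_runner calls in the trace above.
	// Assumes the build-context tar already sits at dir+".tar" on the node.
	const dir = "/var/lib/minikube/build/build.1823641468"
	steps := []string{
		"sudo mkdir -p " + dir,
		"sudo tar -C " + dir + " -xf " + dir + ".tar",
		"docker build -t localhost/my-image:functional-307000 " + dir,
		"sudo rm -rf " + dir,
		"sudo rm -f " + dir + ".tar",
	}
	for _, s := range steps {
		out, err := exec.Command("out/minikube-darwin-amd64",
			"-p", "functional-307000", "ssh", s).CombinedOutput()
		if err != nil {
			fmt.Printf("step %q failed: %v\n%s", s, err, out)
			return
		}
	}
}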

TestFunctional/parallel/ImageCommands/Setup (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.438292557s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-307000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.52s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image load --daemon gcr.io/google-containers/addon-resizer:functional-307000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 image load --daemon gcr.io/google-containers/addon-resizer:functional-307000 --alsologtostderr: (4.385893706s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.68s)

TestFunctional/parallel/DockerEnv/bash (1.87s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-307000 docker-env) && out/minikube-darwin-amd64 status -p functional-307000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-307000 docker-env) && out/minikube-darwin-amd64 status -p functional-307000": (1.165737621s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-307000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.87s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image load --daemon gcr.io/google-containers/addon-resizer:functional-307000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 image load --daemon gcr.io/google-containers/addon-resizer:functional-307000 --alsologtostderr: (2.39946531s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.78s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.235411354s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-307000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image load --daemon gcr.io/google-containers/addon-resizer:functional-307000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 image load --daemon gcr.io/google-containers/addon-resizer:functional-307000 --alsologtostderr: (4.802251552s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.43s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image save gcr.io/google-containers/addon-resizer:functional-307000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 image save gcr.io/google-containers/addon-resizer:functional-307000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.026792216s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.03s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image rm gcr.io/google-containers/addon-resizer:functional-307000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.284322558s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.60s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-307000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image save --daemon gcr.io/google-containers/addon-resizer:functional-307000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 image save --daemon gcr.io/google-containers/addon-resizer:functional-307000 --alsologtostderr: (1.442132227s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-307000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-307000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-307000
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-307000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/Setup (21.18s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-715000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-715000 --driver=docker : (21.175661361s)
--- PASS: TestImageBuild/serial/Setup (21.18s)

TestImageBuild/serial/NormalBuild (1.83s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-715000
E0103 12:02:23.204870   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-715000: (1.826250613s)
--- PASS: TestImageBuild/serial/NormalBuild (1.83s)

TestImageBuild/serial/BuildWithBuildArg (0.93s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-715000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.93s)

TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-715000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-715000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)

TestJSONOutput/start/Command (74.02s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-806000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0103 12:10:26.030258   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-806000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m14.021346525s)
--- PASS: TestJSONOutput/start/Command (74.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.61s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-806000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-806000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-806000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-806000 --output=json --user=testUser: (10.798704879s)
--- PASS: TestJSONOutput/stop/Command (10.80s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.77s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-763000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-763000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (379.567513ms)

-- stdout --
	{"specversion":"1.0","id":"943dbf70-6c9d-45e0-8eb2-366b09d97d79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-763000] minikube v1.32.0 on Darwin 14.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0987a432-42ec-4969-b427-837f81768b48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17885"}}
	{"specversion":"1.0","id":"cfcff847-cca6-43a7-9ea2-48cb5f84e7c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig"}}
	{"specversion":"1.0","id":"2a92a169-8e1a-4735-900d-e4839017a7b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"07ac3ec3-abd4-4d2b-b7d5-6177560e9ce8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"99ad94dd-35e9-4210-a8e7-f3c11578d0c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube"}}
	{"specversion":"1.0","id":"c49881eb-0038-4c17-b32a-28a518e6d075","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"11219548-8f14-4df1-9ba5-8736972b7491","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-763000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-763000
--- PASS: TestErrorJSONOutput (0.77s)
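Note: the events in the stdout block above are CloudEvents 1.0 envelopes, one JSON object per line, with string-valued data payloads. A minimal Go sketch for consuming such a stream (the pipe from minikube is assumed, not part of the test; field names are taken from the events shown above):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelope printed above: specversion, id, source, type,
// datacontenttype, and a string-keyed data map.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g.: minikube start -p demo --output=json | ./eventreader
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // ignore any non-JSON lines
		}
		// io.k8s.sigs.minikube.error events also carry exitcode and name keys.
		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
	}
}

Fed the DRV_UNSUPPORTED_OS event above, this would print the io.k8s.sigs.minikube.error type alongside the "The driver 'fail' is not supported on darwin/amd64" message.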

TestKicCustomNetwork/create_custom_network (22.72s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-807000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-807000 --network=: (20.453322498s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-807000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-807000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-807000: (2.216634404s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.72s)

TestKicCustomNetwork/use_default_bridge_network (22.88s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-950000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-950000 --network=bridge: (20.706102675s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-950000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-950000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-950000: (2.118262111s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.88s)

TestKicExistingNetwork (23.37s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-656000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-656000 --network=existing-network: (20.767091554s)
helpers_test.go:175: Cleaning up "existing-network-656000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-656000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-656000: (2.258615902s)
--- PASS: TestKicExistingNetwork (23.37s)

TestKicCustomSubnet (23.27s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-437000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-437000 --subnet=192.168.60.0/24: (20.811186017s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-437000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-437000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-437000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-437000: (2.4083628s)
--- PASS: TestKicCustomSubnet (23.27s)
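Note: the subnet check above leans on docker's Go-template output. A rough standalone equivalent (network name and expected subnet copied from this run; error handling kept minimal):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same --format template the test passes to docker network inspect.
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-437000", "--format",
		"{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.60.0/24" {
		log.Fatalf("unexpected subnet: %q", got)
	}
	fmt.Println("subnet matches:", got)
}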

TestKicStaticIP (23.9s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-251000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-251000 --static-ip=192.168.200.200: (21.189126544s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-251000 ip
helpers_test.go:175: Cleaning up "static-ip-251000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-251000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-251000: (2.462638906s)
--- PASS: TestKicStaticIP (23.90s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (49.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-329000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-329000 --driver=docker : (20.891130554s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-331000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-331000 --driver=docker : (21.791521991s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-329000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-331000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-331000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-331000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-331000: (2.411913231s)
helpers_test.go:175: Cleaning up "first-329000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-329000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-329000: (2.454332355s)
--- PASS: TestMinikubeProfile (49.20s)
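Note: profile list -ojson emits machine-readable profile data. A sketch of reading it (this assumes the valid/invalid grouping and the Name field that current minikube releases emit; verify the schema against your version):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList models only what this sketch needs; the real payload
// carries full profile configs.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("profile:", p.Name)
	}
}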

TestMountStart/serial/StartWithMountFirst (7.18s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-014000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-014000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.177393763s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.18s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-014000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (7.2s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-025000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-025000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.197578216s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.20s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-025000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (2.08s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-014000 --alsologtostderr -v=5
E0103 12:14:39.340390   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-014000 --alsologtostderr -v=5: (2.075026421s)
--- PASS: TestMountStart/serial/DeleteFirst (2.08s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-025000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.56s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-025000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-025000: (1.559302359s)
--- PASS: TestMountStart/serial/Stop (1.56s)

TestMountStart/serial/RestartStopped (8.31s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-025000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-025000: (7.310095526s)
--- PASS: TestMountStart/serial/RestartStopped (8.31s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-025000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (63.62s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-576000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0103 12:14:58.326860   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-576000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m2.87177746s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.62s)

TestMultiNode/serial/DeployApp2Nodes (45.61s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-576000 -- rollout status deployment/busybox: (2.953458325s)
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0103 12:16:02.384812   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- exec busybox-5bc68d56bd-m6lcg -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- exec busybox-5bc68d56bd-xn76r -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- exec busybox-5bc68d56bd-m6lcg -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- exec busybox-5bc68d56bd-xn76r -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- exec busybox-5bc68d56bd-m6lcg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- exec busybox-5bc68d56bd-xn76r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (45.61s)
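Note: the repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines above are a retry loop, not failures: the second busybox replica simply had no IP assigned yet. A minimal sketch of the same poll (context name from this run; the timeout and interval are arbitrary choices):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same jsonpath query the test shells out to kubectl.
		out, err := exec.Command("kubectl", "--context", "multinode-576000",
			"get", "pods", "-o",
			"jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			log.Fatal(err)
		}
		if ips := strings.Fields(string(out)); len(ips) == 2 {
			fmt.Println("pod IPs:", ips)
			return
		}
		time.Sleep(5 * time.Second) // second replica may still be scheduling
	}
	log.Fatal("timed out waiting for 2 pod IPs")
}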

TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- exec busybox-5bc68d56bd-m6lcg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- exec busybox-5bc68d56bd-m6lcg -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- exec busybox-5bc68d56bd-xn76r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-576000 -- exec busybox-5bc68d56bd-xn76r -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)
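Note: the sh pipeline above recovers the host IP from busybox's nslookup output: awk 'NR==5' keeps line five (where that nslookup build prints the resolved address) and cut -d' ' -f3 takes the third space-separated field. The same extraction in Go, against an illustrative sample (the sample output is assumed busybox formatting, not captured from this run):

package main

import (
	"fmt"
	"strings"
)

// line5Field3 mimics `awk 'NR==5' | cut -d' ' -f3`.
func line5Field3(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // cut splits on single spaces
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10:53\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.65.254\n"
	fmt.Println(line5Field3(sample)) // prints 192.168.65.254
}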

TestMultiNode/serial/AddNode (15.06s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-576000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-576000 -v 3 --alsologtostderr: (13.993546393s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-darwin-amd64 -p multinode-576000 status --alsologtostderr: (1.068258754s)
--- PASS: TestMultiNode/serial/AddNode (15.06s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-576000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.43s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.43s)

TestMultiNode/serial/CopyFile (13.58s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp testdata/cp-test.txt multinode-576000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp multinode-576000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1550946813/001/cp-test_multinode-576000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp multinode-576000:/home/docker/cp-test.txt multinode-576000-m02:/home/docker/cp-test_multinode-576000_multinode-576000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m02 "sudo cat /home/docker/cp-test_multinode-576000_multinode-576000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp multinode-576000:/home/docker/cp-test.txt multinode-576000-m03:/home/docker/cp-test_multinode-576000_multinode-576000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m03 "sudo cat /home/docker/cp-test_multinode-576000_multinode-576000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp testdata/cp-test.txt multinode-576000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp multinode-576000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1550946813/001/cp-test_multinode-576000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp multinode-576000-m02:/home/docker/cp-test.txt multinode-576000:/home/docker/cp-test_multinode-576000-m02_multinode-576000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000 "sudo cat /home/docker/cp-test_multinode-576000-m02_multinode-576000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp multinode-576000-m02:/home/docker/cp-test.txt multinode-576000-m03:/home/docker/cp-test_multinode-576000-m02_multinode-576000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m03 "sudo cat /home/docker/cp-test_multinode-576000-m02_multinode-576000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp testdata/cp-test.txt multinode-576000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp multinode-576000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1550946813/001/cp-test_multinode-576000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp multinode-576000-m03:/home/docker/cp-test.txt multinode-576000:/home/docker/cp-test_multinode-576000-m03_multinode-576000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000 "sudo cat /home/docker/cp-test_multinode-576000-m03_multinode-576000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 cp multinode-576000-m03:/home/docker/cp-test.txt multinode-576000-m02:/home/docker/cp-test_multinode-576000-m03_multinode-576000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 ssh -n multinode-576000-m02 "sudo cat /home/docker/cp-test_multinode-576000-m03_multinode-576000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (13.58s)
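Note: every cp step above is validated by reading the file back over ssh. The round trip reduces to this pattern (profile and paths as in the log; minimal error handling):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile = "multinode-576000"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Copy into the node...
	if err := exec.Command("out/minikube-darwin-amd64", "-p", profile,
		"cp", "testdata/cp-test.txt",
		profile+":/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatal(err)
	}
	// ...then cat it back through ssh and compare.
	got, err := exec.Command("out/minikube-darwin-amd64", "-p", profile,
		"ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match source")
	}
	log.Println("round trip OK")
}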

TestMultiNode/serial/StopNode (2.89s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-darwin-amd64 -p multinode-576000 node stop m03: (1.494909569s)
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-576000 status: exit status 7 (693.120203ms)

-- stdout --
	multinode-576000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-576000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-576000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-576000 status --alsologtostderr: exit status 7 (698.127577ms)

-- stdout --
	multinode-576000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-576000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-576000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0103 12:17:15.377775   17059 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:17:15.377997   17059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:17:15.378002   17059 out.go:309] Setting ErrFile to fd 2...
	I0103 12:17:15.378006   17059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:17:15.378196   17059 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:17:15.378397   17059 out.go:303] Setting JSON to false
	I0103 12:17:15.378425   17059 mustload.go:65] Loading cluster: multinode-576000
	I0103 12:17:15.378457   17059 notify.go:220] Checking for updates...
	I0103 12:17:15.378774   17059 config.go:182] Loaded profile config "multinode-576000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0103 12:17:15.378786   17059 status.go:255] checking status of multinode-576000 ...
	I0103 12:17:15.379201   17059 cli_runner.go:164] Run: docker container inspect multinode-576000 --format={{.State.Status}}
	I0103 12:17:15.430650   17059 status.go:330] multinode-576000 host status = "Running" (err=<nil>)
	I0103 12:17:15.430696   17059 host.go:66] Checking if "multinode-576000" exists ...
	I0103 12:17:15.430953   17059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-576000
	I0103 12:17:15.482257   17059 host.go:66] Checking if "multinode-576000" exists ...
	I0103 12:17:15.482533   17059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 12:17:15.482591   17059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-576000
	I0103 12:17:15.533851   17059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58706 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/multinode-576000/id_rsa Username:docker}
	I0103 12:17:15.618187   17059 ssh_runner.go:195] Run: systemctl --version
	I0103 12:17:15.622717   17059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 12:17:15.632884   17059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-576000
	I0103 12:17:15.685144   17059 kubeconfig.go:92] found "multinode-576000" server: "https://127.0.0.1:58710"
	I0103 12:17:15.685173   17059 api_server.go:166] Checking apiserver status ...
	I0103 12:17:15.685217   17059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 12:17:15.695471   17059 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2278/cgroup
	W0103 12:17:15.704125   17059 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2278/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0103 12:17:15.704186   17059 ssh_runner.go:195] Run: ls
	I0103 12:17:15.708113   17059 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58710/healthz ...
	I0103 12:17:15.714009   17059 api_server.go:279] https://127.0.0.1:58710/healthz returned 200:
	ok
	I0103 12:17:15.714029   17059 status.go:421] multinode-576000 apiserver status = Running (err=<nil>)
	I0103 12:17:15.714043   17059 status.go:257] multinode-576000 status: &{Name:multinode-576000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0103 12:17:15.714057   17059 status.go:255] checking status of multinode-576000-m02 ...
	I0103 12:17:15.714295   17059 cli_runner.go:164] Run: docker container inspect multinode-576000-m02 --format={{.State.Status}}
	I0103 12:17:15.766603   17059 status.go:330] multinode-576000-m02 host status = "Running" (err=<nil>)
	I0103 12:17:15.766630   17059 host.go:66] Checking if "multinode-576000-m02" exists ...
	I0103 12:17:15.766899   17059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-576000-m02
	I0103 12:17:15.818238   17059 host.go:66] Checking if "multinode-576000-m02" exists ...
	I0103 12:17:15.818490   17059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 12:17:15.818551   17059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-576000-m02
	I0103 12:17:15.869783   17059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58747 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/multinode-576000-m02/id_rsa Username:docker}
	I0103 12:17:15.954846   17059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 12:17:15.965616   17059 status.go:257] multinode-576000-m02 status: &{Name:multinode-576000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0103 12:17:15.965636   17059 status.go:255] checking status of multinode-576000-m03 ...
	I0103 12:17:15.965889   17059 cli_runner.go:164] Run: docker container inspect multinode-576000-m03 --format={{.State.Status}}
	I0103 12:17:16.017675   17059 status.go:330] multinode-576000-m03 host status = "Stopped" (err=<nil>)
	I0103 12:17:16.017711   17059 status.go:343] host is not running, skipping remaining checks
	I0103 12:17:16.017723   17059 status.go:257] multinode-576000-m03 status: &{Name:multinode-576000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.89s)
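Note: the non-zero exits above are expected: minikube status encodes cluster state in its exit code, and the suite keeps going when a deliberately stopped node reports as such (compare the "status error: exit status 7 (may be ok)" line later in this report). A sketch of that handling, with the exit-code meaning taken from this log rather than any documented contract:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"-p", "multinode-576000", "status").CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
		// Observed meaning in this run: at least one host is stopped.
		fmt.Printf("cluster partially stopped:\n%s", out)
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("all nodes running:\n%s", out)
}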

TestMultiNode/serial/StartAfterStop (13.73s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-576000 node start m03 --alsologtostderr: (12.726202066s)
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.73s)

TestMultiNode/serial/RestartKeepsNodes (99.83s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-576000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-576000
multinode_test.go:318: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-576000: (22.917871579s)
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-576000 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-576000 --wait=true -v=8 --alsologtostderr: (1m16.793730872s)
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-576000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (99.83s)

TestMultiNode/serial/DeleteNode (5.79s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p multinode-576000 node delete m03: (4.971414021s)
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.79s)
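Note: the go-template handed to kubectl above is ordinary Go text/template syntax; it walks each node's conditions and prints the status of the Ready one. It can be exercised directly against `kubectl get nodes -o json` (a sketch; the node JSON is decoded generically):

package main

import (
	"encoding/json"
	"log"
	"os"
	"os/exec"
	"text/template"
)

// Same template string the test passes to kubectl (outer quotes removed).
const ready = `{{range .items}}{{range .status.conditions}}` +
	`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var nodes interface{}
	if err := json.Unmarshal(out, &nodes); err != nil {
		log.Fatal(err)
	}
	t := template.Must(template.New("ready").Parse(ready))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		log.Fatal(err)
	}
}

Each output line should read " True" for a node whose Ready condition holds.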

TestMultiNode/serial/StopMultiNode (21.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 stop
multinode_test.go:342: (dbg) Done: out/minikube-darwin-amd64 -p multinode-576000 stop: (21.522227942s)
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-576000 status: exit status 7 (162.15231ms)

-- stdout --
	multinode-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-576000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-576000 status --alsologtostderr: exit status 7 (161.938804ms)

-- stdout --
	multinode-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-576000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0103 12:19:37.196097   17544 out.go:296] Setting OutFile to fd 1 ...
	I0103 12:19:37.196404   17544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:19:37.196411   17544 out.go:309] Setting ErrFile to fd 2...
	I0103 12:19:37.196415   17544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 12:19:37.196608   17544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
	I0103 12:19:37.196784   17544 out.go:303] Setting JSON to false
	I0103 12:19:37.196809   17544 mustload.go:65] Loading cluster: multinode-576000
	I0103 12:19:37.196840   17544 notify.go:220] Checking for updates...
	I0103 12:19:37.197180   17544 config.go:182] Loaded profile config "multinode-576000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0103 12:19:37.197193   17544 status.go:255] checking status of multinode-576000 ...
	I0103 12:19:37.197610   17544 cli_runner.go:164] Run: docker container inspect multinode-576000 --format={{.State.Status}}
	I0103 12:19:37.249173   17544 status.go:330] multinode-576000 host status = "Stopped" (err=<nil>)
	I0103 12:19:37.249193   17544 status.go:343] host is not running, skipping remaining checks
	I0103 12:19:37.249198   17544 status.go:257] multinode-576000 status: &{Name:multinode-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0103 12:19:37.249217   17544 status.go:255] checking status of multinode-576000-m02 ...
	I0103 12:19:37.249451   17544 cli_runner.go:164] Run: docker container inspect multinode-576000-m02 --format={{.State.Status}}
	I0103 12:19:37.300712   17544 status.go:330] multinode-576000-m02 host status = "Stopped" (err=<nil>)
	I0103 12:19:37.300761   17544 status.go:343] host is not running, skipping remaining checks
	I0103 12:19:37.300770   17544 status.go:257] multinode-576000-m02 status: &{Name:multinode-576000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.85s)

TestMultiNode/serial/RestartMultiNode (61.75s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-576000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0103 12:19:39.414603   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 12:19:58.401345   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-576000 --wait=true -v=8 --alsologtostderr --driver=docker : (1m0.901243744s)
multinode_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-576000 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (61.75s)

TestMultiNode/serial/ValidateNameConflict (25.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-576000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-576000-m02 --driver=docker 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-576000-m02 --driver=docker : exit status 14 (423.26236ms)

-- stdout --
	* [multinode-576000-m02] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-576000-m02' is duplicated with machine name 'multinode-576000-m02' in profile 'multinode-576000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-576000-m03 --driver=docker 
multinode_test.go:488: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-576000-m03 --driver=docker : (22.016190105s)
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-576000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-576000: exit status 80 (474.096598ms)

-- stdout --
	* Adding node m03 to cluster multinode-576000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-576000-m03 already exists in multinode-576000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-576000-m03
multinode_test.go:500: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-576000-m03: (2.436469233s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.41s)

TestPreload (203.15s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-356000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0103 12:21:21.457463   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-356000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (2m17.851028204s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-356000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-356000 image pull gcr.io/k8s-minikube/busybox: (1.432470186s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-356000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-356000: (10.959519852s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-356000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-356000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (50.1416235s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-356000 image list
helpers_test.go:175: Cleaning up "test-preload-356000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-356000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-356000: (2.475881102s)
--- PASS: TestPreload (203.15s)

TestScheduledStopUnix (95.14s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-328000 --memory=2048 --driver=docker 
E0103 12:24:39.410211   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-328000 --memory=2048 --driver=docker : (21.100562241s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-328000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-328000 -n scheduled-stop-328000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-328000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-328000 --cancel-scheduled
E0103 12:24:58.398672   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-328000 -n scheduled-stop-328000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-328000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-328000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-328000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-328000: exit status 7 (112.932252ms)

-- stdout --
	scheduled-stop-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-328000 -n scheduled-stop-328000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-328000 -n scheduled-stop-328000: exit status 7 (111.03511ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-328000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-328000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-328000: (2.150058175s)
--- PASS: TestScheduledStopUnix (95.14s)
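Note: scheduled stops are asserted through status templates such as {{.TimeToStop}} and {{.Host}}. A sketch of scheduling a stop and reading the countdown back (binary path, profile name, and the 5m window as in the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "scheduled-stop-328000"
	// Schedule a stop five minutes out, as the test does...
	if err := exec.Command("out/minikube-darwin-amd64",
		"stop", "-p", profile, "--schedule", "5m").Run(); err != nil {
		log.Fatal(err)
	}
	// ...then query the remaining time via the same status template.
	out, err := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.TimeToStop}}", "-p", profile).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("time to stop:", strings.TrimSpace(string(out)))
}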

TestSkaffold (123.52s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe445315315 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-736000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-736000 --memory=2600 --driver=docker : (20.935606352s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe445315315 run --minikube-profile skaffold-736000 --kube-context skaffold-736000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe445315315 run --minikube-profile skaffold-736000 --kube-context skaffold-736000 --status-check=true --port-forward=false --interactive=false: (1m25.306082106s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6988566b7-f8psw" [83b466de-f19c-48ee-a14a-14e792e1d600] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.006689434s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5b97596f4c-gnlxh" [9e0f9267-94e2-4ef0-88d4-b333e5756ac9] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003337961s
helpers_test.go:175: Cleaning up "skaffold-736000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-736000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-736000: (3.046533918s)
--- PASS: TestSkaffold (123.52s)

TestInsufficientStorage (10.35s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-657000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-657000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (7.391445313s)

-- stdout --
	{"specversion":"1.0","id":"6d0ebd03-6bcc-45ca-b515-be72bc84b867","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-657000] minikube v1.32.0 on Darwin 14.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"24d12789-5213-42e0-9b9f-c529e0bce1e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17885"}}
	{"specversion":"1.0","id":"2f3f8ed4-b2fb-424f-8a08-60c30235dbf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig"}}
	{"specversion":"1.0","id":"bf43c2dd-9a69-496e-8493-70df3f165d25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"4ef49f17-a7a4-4b59-a93b-277d8aa8fed3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3d248e14-e08f-42e8-8c0f-184048ee6baf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube"}}
	{"specversion":"1.0","id":"9bd7f1a3-c856-4dd4-9b75-4221f783ca94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3dac77ae-be0c-4241-966b-1d957642822c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3e98e51e-b345-4c93-a156-749d86a49987","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b931027a-c03a-473b-9f9d-db926f507356","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3cfd023f-524c-4dd2-8e5d-b675c9905892","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"a39f39ac-ee54-4615-a213-8ddafd5d1ddd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-657000 in cluster insufficient-storage-657000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"412cae22-6013-40e2-a1d8-c377138553bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1703498848-17857 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b4ad752-d6b9-41a8-8936-71c657f2da40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4abbfa2-1d2b-4e4d-8a04-8c60360a8ac2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-657000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-657000 --output=json --layout=cluster: exit status 7 (369.494409ms)

-- stdout --
	{"Name":"insufficient-storage-657000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-657000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0103 12:28:18.456707   18981 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-657000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-657000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-657000 --output=json --layout=cluster: exit status 7 (369.2286ms)

-- stdout --
	{"Name":"insufficient-storage-657000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-657000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0103 12:28:18.826443   18991 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-657000" does not appear in /Users/jenkins/minikube-integration/17885-10646/kubeconfig
	E0103 12:28:18.835984   18991 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/insufficient-storage-657000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-657000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-657000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-657000: (2.217243822s)
--- PASS: TestInsufficientStorage (10.35s)
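Note: the disk shortage above is simulated; the MINIKUBE_TEST_STORAGE_CAPACITY=100 / MINIKUBE_TEST_AVAILABLE_STORAGE=19 variables in the start output appear to fake a nearly full /var so that start fails with exit code 26 (RSRC_DOCKER_STORAGE). On a host that is genuinely out of space, the advice embedded in the error event reduces to (a sketch; -a also removes unused images):

$ docker system prune -a
$ minikube ssh -- docker system prune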

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (9.56s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17885
- KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2302927008/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2302927008/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2302927008/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2302927008/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (9.56s)
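Note: the two sudo commands above would make the driver binary setuid root (chown root:wheel plus chmod u+s), which is how minikube runs the hyperkit driver with elevated privileges. The warning is expected here: sudo needs a password and the test runs with --interactive=false, so the driver update is skipped and the test still passes. On a workstation the bit can be checked with (a sketch; MINIKUBE_HOME as printed above):

$ ls -l "$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit"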

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.83s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17885
- KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current383779511/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current383779511/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current383779511/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current383779511/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.83s)

TestStoppedBinaryUpgrade/Setup (0.91s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.45s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-442000
version_upgrade_test.go:219: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-442000: (3.446916256s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.45s)

TestPause/serial/Start (36.03s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-850000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0103 12:34:18.612793   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:34:39.440236   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-850000 --memory=2048 --install-addons=false --wait=all --driver=docker : (36.027503604s)
--- PASS: TestPause/serial/Start (36.03s)

TestPause/serial/SecondStartNoReconfiguration (32.41s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-850000 --alsologtostderr -v=1 --driver=docker 
E0103 12:34:58.429115   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-850000 --alsologtostderr -v=1 --driver=docker : (32.397696757s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (32.41s)

TestPause/serial/Pause (0.72s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-850000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.39s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-850000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-850000 --output=json --layout=cluster: exit status 2 (391.295735ms)

-- stdout --
	{"Name":"pause-850000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-850000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
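Note: with --layout=cluster the status JSON uses HTTP-style codes, all visible above: 200 OK, 405 Stopped, 418 Paused, 500 Error (and 507 InsufficientStorage earlier in this report). Exit status 2 is the expected non-zero signal that a component is not running, which is precisely what a paused cluster should report. To pull out just the per-node component states (a sketch assuming jq is installed):

$ out/minikube-darwin-amd64 status -p pause-850000 --output=json --layout=cluster | jq '.Nodes[].Components'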

TestPause/serial/Unpause (0.58s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-850000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.58s)

TestPause/serial/PauseAgain (0.82s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-850000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

TestPause/serial/DeletePaused (2.54s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-850000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-850000 --alsologtostderr -v=5: (2.538910591s)
--- PASS: TestPause/serial/DeletePaused (2.54s)

TestPause/serial/VerifyDeletedResources (0.53s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-850000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-850000: exit status 1 (52.718578ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-850000: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)
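Note: the non-zero exit is the assertion here; after "delete -p pause-850000" the profile's Docker volume must be gone, so "docker volume inspect" failing with "no such volume" is the passing outcome. The same cleanup can be re-checked by hand (a sketch using docker's name filters):

$ docker ps -a --filter name=pause-850000
$ docker network ls --filter name=pause-850000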

TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-009000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-009000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (465.748763ms)

-- stdout --
	* [NoKubernetes-009000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17885
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)
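Note: exit status 14 (MK_USAGE) is the expected result: --no-kubernetes and --kubernetes-version are mutually exclusive. Either flag works on its own; the next test uses the first form, and the error's own advice clears a globally configured version (commands from this run):

$ out/minikube-darwin-amd64 start -p NoKubernetes-009000 --no-kubernetes --driver=docker
$ minikube config unset kubernetes-version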

TestNoKubernetes/serial/StartWithK8s (22.25s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-009000 --driver=docker 
E0103 12:35:40.536358   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-009000 --driver=docker : (21.854546956s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-009000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (22.25s)

TestNoKubernetes/serial/StartWithStopK8s (8.15s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-009000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-009000 --no-kubernetes --driver=docker : (5.507563065s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-009000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-009000 status -o json: exit status 2 (396.956675ms)

-- stdout --
	{"Name":"NoKubernetes-009000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-009000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-009000: (2.248801229s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.15s)

TestNoKubernetes/serial/Start (6.28s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-009000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-009000 --no-kubernetes --driver=docker : (6.27654311s)
--- PASS: TestNoKubernetes/serial/Start (6.28s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-009000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-009000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (363.291222ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
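Note: "systemctl is-active --quiet" prints nothing and exits 0 only when the unit is active, so the status-3 exit relayed through ssh (systemd's usual code for an inactive unit) is what proves kubelet is not running. Without --quiet the state is printed directly (a sketch):

$ out/minikube-darwin-amd64 ssh -p NoKubernetes-009000 "sudo systemctl is-active kubelet"   # expected to print "inactive"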

TestNoKubernetes/serial/ProfileList (1.23s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.23s)

TestNoKubernetes/serial/Stop (1.59s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-009000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-009000: (1.592970375s)
--- PASS: TestNoKubernetes/serial/Stop (1.59s)

TestNoKubernetes/serial/StartNoArgs (7.27s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-009000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-009000 --driver=docker : (7.270138507s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.27s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-009000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-009000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (356.726885ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestNetworkPlugins/group/auto/Start (38.12s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (38.120674064s)
--- PASS: TestNetworkPlugins/group/auto/Start (38.12s)

TestNetworkPlugins/group/auto/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-236000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

TestNetworkPlugins/group/auto/NetCatPod (11.2s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-236000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-thxbq" [c8a2609d-cb94-4ff6-87db-2eb8dea84d63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-thxbq" [c8a2609d-cb94-4ff6-87db-2eb8dea84d63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004675156s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.20s)

TestNetworkPlugins/group/auto/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-236000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
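Note: every TestNetworkPlugins group below repeats this same probe sequence against a netcat deployment: kubelet flags via ssh, pod readiness, then DNS, localhost, and hairpin connectivity. The three connectivity probes reduce to (commands from this run; substitute each group's context):

$ kubectl --context auto-236000 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
$ kubectl --context auto-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"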

TestNetworkPlugins/group/kindnet/Start (49.99s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
E0103 12:37:56.678860   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:38:01.483707   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (49.992869803s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (49.99s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lxq9n" [42b945b7-9925-44eb-a593-f649f15b2e94] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005972613s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-236000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-236000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qr4gj" [aa549256-33a0-4a55-b779-36faeca4d4a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qr4gj" [aa549256-33a0-4a55-b779-36faeca4d4a3] Running
E0103 12:38:24.374630   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005645163s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-236000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (49.93s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
E0103 12:39:39.438507   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (49.93251973s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.93s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dhtsj" [42cefca8-e2c1-4070-91dc-d118708970b0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005172077s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-236000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (11.25s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-236000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2wsmb" [b9f9f602-33e2-437b-816f-869fcef47c00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2wsmb" [b9f9f602-33e2-437b-816f-869fcef47c00] Running
E0103 12:39:58.424690   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.006835435s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

TestNetworkPlugins/group/flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-236000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (37.55s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (37.551945764s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (37.55s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-236000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-236000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7f7vk" [1e90eeae-f88c-4ddf-91a8-f7154daf2e3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7f7vk" [1e90eeae-f88c-4ddf-91a8-f7154daf2e3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004015148s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-236000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (38.88s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (38.876037325s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.88s)

TestNetworkPlugins/group/kubenet/Start (38.73s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0103 12:41:48.956679   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:41:48.962512   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:41:48.972767   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:41:48.992923   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:41:49.033545   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:41:49.114565   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:41:49.274790   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:41:49.595811   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:41:50.236797   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:41:51.518654   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:41:54.078757   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:41:59.198890   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:42:09.438917   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (38.732913354s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (38.73s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-236000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

TestNetworkPlugins/group/bridge/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-236000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t75mk" [bfe2911a-43c1-415e-926f-df5b249a781c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t75mk" [bfe2911a-43c1-415e-926f-df5b249a781c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004664588s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-236000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-236000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-454fx" [a2596da2-8e2d-4763-8f80-96f00c7af678] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-454fx" [a2596da2-8e2d-4763-8f80-96f00c7af678] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003541573s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.25s)

TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-236000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-236000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (69.52s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (1m9.518040434s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.52s)

TestNetworkPlugins/group/calico/Start (76.07s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
E0103 12:42:56.674959   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:43:10.880760   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:43:12.410553   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:12.416527   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:12.426673   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:12.448352   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:12.489399   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:12.570336   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:12.730602   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:13.050961   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:13.691260   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:14.971454   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:17.531598   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:22.652143   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:32.892300   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:43:53.372288   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m16.074547512s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.07s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-236000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-236000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dccbf" [eaac4745-1438-4172-b44b-0d090af12462] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dccbf" [eaac4745-1438-4172-b44b-0d090af12462] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003573539s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gvmnb" [58864b6f-4cb3-4ce7-b0e3-fe1222ed8185] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005567474s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-236000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-236000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

TestNetworkPlugins/group/calico/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-236000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9wbhl" [e4c74224-24da-4dd1-b244-51047cb9e7fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9wbhl" [e4c74224-24da-4dd1-b244-51047cb9e7fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.023512365s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.35s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-236000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/false/Start (38.41s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
E0103 12:44:39.434082   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 12:44:42.956715   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:44:42.962255   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:44:42.972837   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:44:42.993096   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:44:43.033243   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:44:43.113455   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:44:43.274109   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:44:43.594465   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:44:44.234662   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:44:45.515452   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:44:48.076125   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-236000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (38.407809942s)
--- PASS: TestNetworkPlugins/group/false/Start (38.41s)

TestNetworkPlugins/group/false/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-236000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.38s)

TestNetworkPlugins/group/false/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-236000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hzplw" [cac91022-0b1d-4665-887d-a8efd38f5126] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hzplw" [cac91022-0b1d-4665-887d-a8efd38f5126] Running
E0103 12:45:23.946319   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003511049s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.27s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-236000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-236000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (156.27s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-742000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0103 12:45:56.252729   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:46:02.796037   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:02.801174   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:02.811357   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:02.831453   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:02.871574   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:02.952277   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:03.112429   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:03.433708   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:04.107353   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:04.906041   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:46:05.387679   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:07.947819   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:13.068389   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:23.309423   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:43.789504   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:46:48.954114   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:47:16.638059   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:47:17.026023   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:17.031332   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:17.041835   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:17.061988   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:17.102256   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:17.183096   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:17.345318   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:17.666226   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:18.306488   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:18.830624   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:18.835828   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:18.846351   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:18.866561   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:18.906783   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:18.987537   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:19.147751   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:19.467936   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:19.587396   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:20.108489   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:21.390557   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:22.148328   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:23.951105   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:24.749404   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:47:26.825352   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
E0103 12:47:27.268608   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:29.071879   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:37.509138   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:39.312035   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:47:56.672265   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:47:57.989250   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:47:59.792650   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:48:12.406764   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-742000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (2m36.269294779s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (156.27s)

TestStartStop/group/no-preload/serial/DeployApp (9.53s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-742000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fb3a8cfa-2411-4b06-84bf-0a2954011c87] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fb3a8cfa-2411-4b06-84bf-0a2954011c87] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00509035s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-742000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.53s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-742000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-742000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.08481977s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-742000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (10.86s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-742000 --alsologtostderr -v=3
E0103 12:48:39.152490   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:48:40.295103   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:48:40.956957   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:48:46.873471   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-742000 --alsologtostderr -v=3: (10.857735879s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.86s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-742000 -n no-preload-742000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-742000 -n no-preload-742000: exit status 7 (110.622311ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-742000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)

TestStartStop/group/no-preload/serial/SecondStart (332.9s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-742000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0103 12:49:02.107027   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:02.112741   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:02.123085   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:02.144240   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:02.186335   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:02.268535   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:02.430342   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:02.751798   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:03.393919   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:49:04.674632   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-742000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (5m32.428252927s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-742000 -n no-preload-742000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (332.90s)

TestStartStop/group/old-k8s-version/serial/Stop (1.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-079000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-079000 --alsologtostderr -v=3: (1.581280334s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.58s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-079000 -n old-k8s-version-079000: exit status 7 (110.163823ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-079000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (18.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2l7cz" [642d2b29-d2aa-496c-9aa8-4104b78bb60d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0103 12:54:29.808508   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2l7cz" [642d2b29-d2aa-496c-9aa8-4104b78bb60d] Running
E0103 12:54:36.598840   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.004316037s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (18.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2l7cz" [642d2b29-d2aa-496c-9aa8-4104b78bb60d] Running
E0103 12:54:39.642888   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 12:54:41.689274   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:54:43.168223   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00403312s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-742000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-742000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (3.33s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-742000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-742000 -n no-preload-742000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-742000 -n no-preload-742000: exit status 2 (398.741992ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-742000 -n no-preload-742000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-742000 -n no-preload-742000: exit status 2 (391.625608ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-742000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-742000 -n no-preload-742000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-742000 -n no-preload-742000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.33s)

TestStartStop/group/embed-certs/serial/FirstStart (37.14s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-362000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0103 12:54:58.632809   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:55:16.454542   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-362000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (37.144266953s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (37.14s)

TestStartStop/group/embed-certs/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-362000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d2bb0b8-222c-4f69-9b2b-584483006cdd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2d2bb0b8-222c-4f69-9b2b-584483006cdd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004972309s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-362000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-362000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-362000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.054979434s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-362000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/embed-certs/serial/Stop (10.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-362000 --alsologtostderr -v=3
E0103 12:55:44.140326   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/false-236000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-362000 --alsologtostderr -v=3: (10.93079902s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.93s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.48s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-362000 -n embed-certs-362000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-362000 -n embed-certs-362000: exit status 7 (109.316013ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-362000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.48s)

TestStartStop/group/embed-certs/serial/SecondStart (314.61s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-362000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0103 12:56:03.009265   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
E0103 12:56:49.167618   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:57:17.242359   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/bridge-236000/client.crt: no such file or directory
E0103 12:57:19.047431   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kubenet-236000/client.crt: no such file or directory
E0103 12:57:56.888828   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/skaffold-736000/client.crt: no such file or directory
E0103 12:58:12.215910   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/auto-236000/client.crt: no such file or directory
E0103 12:58:12.624676   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/kindnet-236000/client.crt: no such file or directory
E0103 12:58:26.594417   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:26.599993   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:26.610796   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:26.630854   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:26.671379   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:26.751783   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:26.912535   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:27.232960   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:27.873309   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:29.153575   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:31.713942   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:36.834187   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:58:47.074887   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:59:02.120921   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/custom-flannel-236000/client.crt: no such file or directory
E0103 12:59:07.555784   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
E0103 12:59:08.920573   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/calico-236000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-362000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (5m14.175074094s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-362000 -n embed-certs-362000
E0103 13:01:03.018016   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/enable-default-cni-236000/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (314.61s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hbtpb" [fee392b2-c0b7-466f-b071-ec8c4f05d86f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0103 13:01:06.247708   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/flannel-236000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hbtpb" [fee392b2-c0b7-466f-b071-ec8c4f05d86f] Running
E0103 13:01:10.439989   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/no-preload-742000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004130049s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hbtpb" [fee392b2-c0b7-466f-b071-ec8c4f05d86f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005220603s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-362000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-362000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (3.26s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-362000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-362000 -n embed-certs-362000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-362000 -n embed-certs-362000: exit status 2 (395.846132ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-362000 -n embed-certs-362000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-362000 -n embed-certs-362000: exit status 2 (394.71746ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-362000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-362000 -n embed-certs-362000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-362000 -n embed-certs-362000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.26s)
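
Note: the Pause subtest drives a pause -> status -> unpause cycle and deliberately tolerates exit status 2 from "status" while components are paused (the "status error: exit status 2 (may be ok)" lines above). A minimal Go sketch of the same round trip, assuming a minikube binary on PATH rather than the tree-local out/minikube-darwin-amd64; the profile name is the one from this run:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output plus exit code.
func run(bin string, args ...string) (string, int) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok {
		return string(out), ee.ExitCode()
	}
	return string(out), 0
}

func main() {
	const profile = "embed-certs-362000"
	run("minikube", "pause", "-p", profile)
	// While paused, status prints Paused/Stopped and exits 2,
	// which the harness treats as acceptable ("may be ok").
	out, code := run("minikube", "status", "--format={{.APIServer}}", "-p", profile)
	fmt.Printf("apiserver=%q exit=%d\n", out, code)
	run("minikube", "unpause", "-p", profile)
}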

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-213000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-213000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (1m14.700883694s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.70s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-213000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [410db751-eeb9-4878-b262-d54b44776485] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [410db751-eeb9-4878-b262-d54b44776485] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005783907s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-213000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)
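
Note: the DeployApp wait above is a poll: the pod is first reported Pending with unready containers, then Running, inside an 8m budget. A rough Go equivalent of that loop, assuming kubectl on PATH and reusing the context and label selector from this run; unlike the harness it checks only the pod phase, not container readiness:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"--context", "default-k8s-diff-port-213000",
		"get", "pods", "-l", "integration-test=busybox",
		"-o", "jsonpath={.items[0].status.phase}",
	}
	deadline := time.Now().Add(8 * time.Minute)
	for time.Now().Before(deadline) {
		// kubectl exits non-zero while no pod matches; keep polling.
		out, err := exec.Command("kubectl", args...).Output()
		if err == nil && string(out) == "Running" {
			fmt.Println("pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}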

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-213000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-213000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051120855s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-213000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-213000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-213000 --alsologtostderr -v=3: (11.010065823s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213000 -n default-k8s-diff-port-213000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213000 -n default-k8s-diff-port-213000: exit status 7 (111.893298ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-213000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (312.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-213000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-213000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (5m11.653126694s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-213000 -n default-k8s-diff-port-213000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (312.09s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (20.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-clxrn" [3321042c-2bf4-4ca6-8e56-84019af41a40] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-clxrn" [3321042c-2bf4-4ca6-8e56-84019af41a40] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.00537592s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (20.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-clxrn" [3321042c-2bf4-4ca6-8e56-84019af41a40] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003826899s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-213000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-213000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-213000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-213000 -n default-k8s-diff-port-213000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-213000 -n default-k8s-diff-port-213000: exit status 2 (390.819179ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-213000 -n default-k8s-diff-port-213000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-213000 -n default-k8s-diff-port-213000: exit status 2 (398.886392ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-213000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-213000 -n default-k8s-diff-port-213000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-213000 -n default-k8s-diff-port-213000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

TestStartStop/group/newest-cni/serial/FirstStart (35.12s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-298000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-298000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (35.121470405s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.12s)
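
Note: the newest-cni variant pins a release-candidate Kubernetes and exercises CNI-specific knobs; every flag is visible in the invocation above (--feature-gates ServerSideApply=true, --network-plugin=cni, and a kubeadm pod CIDR via --extra-config). A sketch of issuing the same start from Go, with the flags copied verbatim; using a plain "minikube" on PATH instead of the tree-local binary is an assumption:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Flags mirror the FirstStart invocation logged above.
	cmd := exec.Command("minikube", "start",
		"-p", "newest-cni-298000",
		"--memory=2200",
		"--wait=apiserver,system_pods,default_sa",
		"--feature-gates", "ServerSideApply=true",
		"--network-plugin=cni",
		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
		"--driver=docker",
		"--kubernetes-version=v1.29.0-rc.2",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}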

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-298000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-298000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.14296453s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/newest-cni/serial/Stop (6.01s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-298000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-298000 --alsologtostderr -v=3: (6.007048842s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (6.01s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-298000 -n newest-cni-298000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-298000 -n newest-cni-298000: exit status 7 (110.82986ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-298000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/newest-cni/serial/SecondStart (28.23s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-298000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-298000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (27.81712926s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-298000 -n newest-cni-298000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (28.23s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-298000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (3.17s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-298000 --alsologtostderr -v=1
E0103 13:09:58.654172   11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-298000 -n newest-cni-298000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-298000 -n newest-cni-298000: exit status 2 (389.678886ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-298000 -n newest-cni-298000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-298000 -n newest-cni-298000: exit status 2 (393.062335ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-298000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-298000 -n newest-cni-298000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-298000 -n newest-cni-298000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.17s)

Test skip (23/329)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (15.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 24.105286ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-4c5td" [7097efcc-196c-4b8e-9d63-deaf5f782a11] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004022982s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8znhg" [eff40d5c-04a5-4e7f-bbb3-0116f0078a0b] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005470161s
addons_test.go:340: (dbg) Run:  kubectl --context addons-927000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-927000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-927000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.873847238s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.96s)
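
Note: before skipping, the test did verify in-cluster reachability: wget --spider requests only the response headers, so a clean exit shows the registry Service name resolves and answers without downloading anything. For illustration, the same probe wrapped in Go (arguments copied from the log line above; kubectl on PATH is assumed):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Disposable busybox pod issuing the header-only probe.
	cmd := exec.Command("kubectl", "--context", "addons-927000",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-it", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}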

TestAddons/parallel/Ingress (10.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-927000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-927000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-927000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [996a2eed-6eea-4456-90a6-43bb660ad68b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [996a2eed-6eea-4456-90a6-43bb660ad68b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004542541s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-927000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.91s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (15.15s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-307000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-307000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-rkgvn" [f39d53a0-50c0-4e74-b91a-d2cd96df9fc4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-rkgvn" [f39d53a0-50c0-4e74-b91a-d2cd96df9fc4] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.006619408s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (15.15s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctional/parallel/MountCmd/any-port (15.45s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3948317200/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704312029489970000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3948317200/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704312029489970000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3948317200/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704312029489970000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3948317200/001/test-1704312029489970000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (378.536623ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.438948ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (351.957709ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.708198ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.056955ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (369.888121ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (347.86718ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "sudo umount -f /mount-9p": exit status 1 (354.831181ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:92: "out/minikube-darwin-amd64 -p functional-307000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3948317200/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (15.45s)
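
Note: the skip above is the end of a bounded retry: the harness re-ran the findmnt probe seven times before concluding the 9p mount would never appear (on macOS an unsigned binary cannot listen on a non-localhost port until a firewall prompt is acknowledged, which never happens on CI). A generic Go version of that probe loop, with the profile and mount point from this run; the attempt count and sleep interval are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount re-runs the findmnt probe until it succeeds
// or the attempts are exhausted, as the harness does.
func waitForMount(profile string, attempts int) bool {
	for i := 0; i < attempts; i++ {
		err := exec.Command("minikube", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			return true
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	if !waitForMount("functional-307000", 7) {
		fmt.Println("mount did not appear; skipping, as the test does")
	}
}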

TestFunctional/parallel/MountCmd/specific-port (15.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port3763518196/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (373.492523ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.233486ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.200665ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (349.959195ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (351.526829ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (389.356238ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (347.148478ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "sudo umount -f /mount-9p": exit status 1 (351.218547ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-307000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port3763518196/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (15.24s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (6.39s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-236000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-236000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-236000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-236000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-236000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-236000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-236000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-236000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-236000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-236000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-236000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: /etc/hosts:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: /etc/resolv.conf:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-236000

>>> host: crictl pods:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: crictl containers:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> k8s: describe netcat deployment:
error: context "cilium-236000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-236000" does not exist

>>> k8s: netcat logs:
error: context "cilium-236000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-236000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-236000" does not exist

>>> k8s: coredns logs:
error: context "cilium-236000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-236000" does not exist

>>> k8s: api server logs:
error: context "cilium-236000" does not exist

>>> host: /etc/cni:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: ip a s:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: ip r s:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: iptables-save:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: iptables table nat:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-236000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-236000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-236000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-236000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-236000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-236000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-236000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-236000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-236000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-236000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-236000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: kubelet daemon config:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> k8s: kubelet logs:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-236000

>>> host: docker daemon status:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: docker daemon config:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: docker system info:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: cri-docker daemon status:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: cri-docker daemon config:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: cri-dockerd version:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: containerd daemon status:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: containerd daemon config:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: containerd config dump:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: crio daemon status:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: crio daemon config:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: /etc/crio:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

>>> host: crio config:
* Profile "cilium-236000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236000"

----------------------- debugLogs end: cilium-236000 [took: 5.893434968s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-236000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-236000
--- SKIP: TestNetworkPlugins/group/cilium (6.39s)

TestStartStop/group/disable-driver-mounts (0.39s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-174000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-174000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.39s)
